Message ID | 20201008041250.22642-1-xiyou.wangcong@gmail.com (mailing list archive)
---|---
State | Not Applicable
Series | [net] tipc: fix the skb_unshare() in tipc_buf_append()
On Thu, Oct 8, 2020 at 12:12 PM Cong Wang <xiyou.wangcong@gmail.com> wrote:
>
> skb_unshare() drops a reference count on the old skb unconditionally,
> so in the failure case, we end up freeing the skb twice here.
> And because the skb is allocated in fclone and cloned by caller
> tipc_msg_reassemble(), the consequence is actually freeing the
> original skb too, thus triggered the UAF by syzbot.
Do you mean:

	frag = skb_clone(skb, GFP_ATOMIC);
	frag = skb_unshare(frag)

will free the 'skb' too?

>
> Fix this by replacing this skb_unshare() with skb_cloned()+skb_copy().
>
> Fixes: ff48b6222e65 ("tipc: use skb_unshare() instead in tipc_buf_append()")
> Reported-and-tested-by: syzbot+e96a7ba46281824cc46a@syzkaller.appspotmail.com
> Cc: Xin Long <lucien.xin@gmail.com>
> Cc: Jon Maloy <jmaloy@redhat.com>
> Cc: Ying Xue <ying.xue@windriver.com>
> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
> ---
>  net/tipc/msg.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/net/tipc/msg.c b/net/tipc/msg.c
> index 52e93ba4d8e2..681224401871 100644
> --- a/net/tipc/msg.c
> +++ b/net/tipc/msg.c
> @@ -150,7 +150,8 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
>  	if (fragid == FIRST_FRAGMENT) {
>  		if (unlikely(head))
>  			goto err;
> -		frag = skb_unshare(frag, GFP_ATOMIC);
> +		if (skb_cloned(frag))
> +			frag = skb_copy(frag, GFP_ATOMIC);
>  		if (unlikely(!frag))
>  			goto err;
>  		head = *headbuf = frag;
> --
> 2.28.0
>
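For reference, the behavior the commit message relies on can be sketched as below. This is a simplified, hedged paraphrase of skb_unshare(), not the exact helper in include/linux/skbuff.h (the real one distinguishes consume_skb() from kfree_skb(); that detail is collapsed here): once the skb is cloned, the old reference is dropped whether or not the copy succeeds, so a caller that frees the same pointer again on failure frees it twice.

	/* Simplified sketch of skb_unshare(), for illustration only --
	 * not the exact kernel implementation.
	 */
	static inline struct sk_buff *skb_unshare_sketch(struct sk_buff *skb,
							 gfp_t pri)
	{
		if (skb_cloned(skb)) {
			struct sk_buff *nskb = skb_copy(skb, pri);

			/* The reference on the old skb is dropped here even
			 * when skb_copy() failed, so the caller must not
			 * free it again.
			 */
			kfree_skb(skb);
			skb = nskb;	/* NULL on allocation failure */
		}
		return skb;
	}
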
On Thu, Oct 8, 2020 at 1:45 AM Xin Long <lucien.xin@gmail.com> wrote:
>
> On Thu, Oct 8, 2020 at 12:12 PM Cong Wang <xiyou.wangcong@gmail.com> wrote:
> >
> > skb_unshare() drops a reference count on the old skb unconditionally,
> > so in the failure case, we end up freeing the skb twice here.
> > And because the skb is allocated in fclone and cloned by caller
> > tipc_msg_reassemble(), the consequence is actually freeing the
> > original skb too, thus triggered the UAF by syzbot.
> Do you mean:
> frag = skb_clone(skb, GFP_ATOMIC);
> frag = skb_unshare(frag)
> will free the 'skb' too?

Yes, more precisely, I mean:

	new = skb_clone(old)
	kfree_skb(new)
	kfree_skb(new)

would free 'old' eventually when 'old' is a fast clone. The skb_clone()
sets ->fclone_ref to 2 and returns the clone, whose skb->fclone is
SKB_FCLONE_CLONE. So, the first call of kfree_skbmem() will just
decrease ->fclone_ref by 1, but the second call will trigger
kmem_cache_free() which frees _both_ skb's.

Thanks.
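An annotated version of that sequence may make the refcount transitions easier to follow. The comments simply restate the explanation above and assume 'old' was allocated from the fclone cache, as in TIPC's transmit path; this is an illustrative fragment, not standalone code:

	new = skb_clone(old, GFP_ATOMIC);	/* 'new' is the companion clone of
						 * the fclone pair: fclone_ref == 2,
						 * new->fclone == SKB_FCLONE_CLONE  */
	kfree_skb(new);				/* kfree_skbmem(): fclone_ref 2 -> 1,
						 * nothing returned to the cache yet */
	kfree_skb(new);				/* fclone_ref 1 -> 0: kmem_cache_free()
						 * releases the whole fclone pair,
						 * i.e. both 'new' and the still
						 * in-use 'old' -- the UAF           */
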
On Fri, Oct 9, 2020 at 1:45 AM Cong Wang <xiyou.wangcong@gmail.com> wrote:
>
> On Thu, Oct 8, 2020 at 1:45 AM Xin Long <lucien.xin@gmail.com> wrote:
> >
> > On Thu, Oct 8, 2020 at 12:12 PM Cong Wang <xiyou.wangcong@gmail.com> wrote:
> > >
> > > skb_unshare() drops a reference count on the old skb unconditionally,
> > > so in the failure case, we end up freeing the skb twice here.
> > > And because the skb is allocated in fclone and cloned by caller
> > > tipc_msg_reassemble(), the consequence is actually freeing the
> > > original skb too, thus triggered the UAF by syzbot.
> > Do you mean:
> > frag = skb_clone(skb, GFP_ATOMIC);
> > frag = skb_unshare(frag)
> > will free the 'skb' too?
>
> Yes, more precisely, I mean:
>
> new = skb_clone(old)
> kfree_skb(new)
> kfree_skb(new)
>
> would free 'old' eventually when 'old' is a fast clone. The skb_clone()
> sets ->fclone_ref to 2 and returns the clone, whose skb->fclone is
> SKB_FCLONE_CLONE. So, the first call of kfree_skbmem() will
> just decrease ->fclone_ref by 1, but the second call will trigger
> kmem_cache_free() which frees _both_ skb's. Thanks.

Didn't notice kfree_skb 'buf' on the err path.

Reviewed-by: Xin Long <lucien.xin@gmail.com>
On Wed, 7 Oct 2020 21:12:50 -0700 Cong Wang wrote:
> skb_unshare() drops a reference count on the old skb unconditionally,
> so in the failure case, we end up freeing the skb twice here.
> And because the skb is allocated in fclone and cloned by caller
> tipc_msg_reassemble(), the consequence is actually freeing the
> original skb too, thus triggered the UAF by syzbot.
>
> Fix this by replacing this skb_unshare() with skb_cloned()+skb_copy().
>
> Fixes: ff48b6222e65 ("tipc: use skb_unshare() instead in tipc_buf_append()")
> Reported-and-tested-by: syzbot+e96a7ba46281824cc46a@syzkaller.appspotmail.com
> Cc: Xin Long <lucien.xin@gmail.com>
> Cc: Jon Maloy <jmaloy@redhat.com>
> Cc: Ying Xue <ying.xue@windriver.com>
> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>

Applied and queued for stable, thank you!
diff --git a/net/tipc/msg.c b/net/tipc/msg.c
index 52e93ba4d8e2..681224401871 100644
--- a/net/tipc/msg.c
+++ b/net/tipc/msg.c
@@ -150,7 +150,8 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
 	if (fragid == FIRST_FRAGMENT) {
 		if (unlikely(head))
 			goto err;
-		frag = skb_unshare(frag, GFP_ATOMIC);
+		if (skb_cloned(frag))
+			frag = skb_copy(frag, GFP_ATOMIC);
 		if (unlikely(!frag))
 			goto err;
 		head = *headbuf = frag;
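Read together with the error path that Xin Long mentions above (which kfree_skb()s *buf), the patched FIRST_FRAGMENT handling can be annotated roughly as follows. The comments are an interpretation of the hunk and the thread, not part of the actual source; surrounding code is elided:

	if (fragid == FIRST_FRAGMENT) {
		if (unlikely(head))
			goto err;
		/* Copy only when the skb is a clone.  If skb_copy() fails,
		 * frag is NULL, but the clone is still owned by *buf --
		 * nothing has been freed at this point.
		 */
		if (skb_cloned(frag))
			frag = skb_copy(frag, GFP_ATOMIC);
		if (unlikely(!frag))
			goto err;	/* err frees *buf exactly once, so the
					 * fclone pair keeps one reference and
					 * the caller's original skb stays
					 * valid: no double free, no UAF     */
		head = *headbuf = frag;
		/* ... */
	}
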
skb_unshare() drops a reference count on the old skb unconditionally,
so in the failure case, we end up freeing the skb twice here.
And because the skb is allocated in fclone and cloned by caller
tipc_msg_reassemble(), the consequence is actually freeing the
original skb too, thus triggered the UAF by syzbot.

Fix this by replacing this skb_unshare() with skb_cloned()+skb_copy().

Fixes: ff48b6222e65 ("tipc: use skb_unshare() instead in tipc_buf_append()")
Reported-and-tested-by: syzbot+e96a7ba46281824cc46a@syzkaller.appspotmail.com
Cc: Xin Long <lucien.xin@gmail.com>
Cc: Jon Maloy <jmaloy@redhat.com>
Cc: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
---
 net/tipc/msg.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)