Message ID | 20201016121007.2378114-1-a.nogikh@yandex.ru (mailing list archive)
---|---
State | Not Applicable
Delegated to: | Netdev Maintainers
Series | netem: prevent division by zero in tabledist
On Fri, Oct 16, 2020 at 2:10 PM Aleksandr Nogikh <a.nogikh@yandex.ru> wrote:
>
> From: Aleksandr Nogikh <nogikh@google.com>
>
> Currently it is possible to craft a special netlink RTM_NEWQDISC
> command that results in jitter being equal to 0x80000000. It is enough
> to set the 32 bit jitter to 0x02000000 (it will later be multiplied by
> 2^6) or to set the 64 bit jitter via TCA_NETEM_JITTER64. This causes an
> overflow during the generation of uniformly distributed numbers in
> tabledist(), which in turn leads to a division by zero (sigma != 0, but
> sigma * 2 is 0).
>
> The related fragment of code needs 32-bit division - see commit
> 9b0ed89 ("netem: remove unnecessary 64 bit modulus"), so
> switching to 64 bit is not an option.
>
> Fix the issue by preventing 32 bit integer overflows in
> tabledist(). Also, instead of truncating the s64 integer to s32,
> truncate it to u32, as a negative standard deviation does not make
> sense anyway.
>
> Reported-by: syzbot+ec762a6342ad0d3c0d8f@syzkaller.appspotmail.com
> Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
> ---
>  net/sched/sch_netem.c | 15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
> index 84f82771cdf5..d8b0bf1b5346 100644
> --- a/net/sched/sch_netem.c
> +++ b/net/sched/sch_netem.c
> @@ -315,7 +315,7 @@ static bool loss_event(struct netem_sched_data *q)
>   * std deviation sigma. Uses table lookup to approximate the desired
>   * distribution, and a uniformly-distributed pseudo-random source.
>   */
> -static s64 tabledist(s64 mu, s32 sigma,
> +static s64 tabledist(s64 mu, u32 sigma,
>                      struct crndstate *state,
>                      const struct disttable *dist)
>  {
> @@ -329,8 +329,14 @@ static s64 tabledist(s64 mu, s32 sigma,
>         rnd = get_crandom(state);
>
>         /* default uniform distribution */
> -       if (dist == NULL)
> +       if (!dist) {
> +               /* Sigma is too big to perform 32 bit division.
> +                * Use the widest possible deviation.
> +                */
> +               if ((u64)sigma * 2ULL >= U32_MAX)

Or simply:

        if (sigma >= U32_MAX / 2)

> +                       return mu + rnd - U32_MAX / 2;

Since only syzbot can possibly use these silly parameters, what about
capping the parameters in the control path, when netem is set up/changed,
instead of adding a test in the fast path?
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 84f82771cdf5..d8b0bf1b5346 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -315,7 +315,7 @@ static bool loss_event(struct netem_sched_data *q)
  * std deviation sigma. Uses table lookup to approximate the desired
  * distribution, and a uniformly-distributed pseudo-random source.
  */
-static s64 tabledist(s64 mu, s32 sigma,
+static s64 tabledist(s64 mu, u32 sigma,
                     struct crndstate *state,
                     const struct disttable *dist)
 {
@@ -329,8 +329,14 @@ static s64 tabledist(s64 mu, s32 sigma,
        rnd = get_crandom(state);

        /* default uniform distribution */
-       if (dist == NULL)
+       if (!dist) {
+               /* Sigma is too big to perform 32 bit division.
+                * Use the widest possible deviation.
+                */
+               if ((u64)sigma * 2ULL >= U32_MAX)
+                       return mu + rnd - U32_MAX / 2;
                return ((rnd % (2 * sigma)) + mu) - sigma;
+       }

        t = dist->table[rnd % dist->size];
        x = (sigma % NETEM_DIST_SCALE) * t;
@@ -533,7 +539,10 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
        u64 now;
        s64 delay;

-       delay = tabledist(q->latency, q->jitter,
+       /* tabledist is unable to handle 64 bit jitters yet, so we adjust it beforehand */
+       u32 constrained_jitter = q->jitter > 0 ? min_t(u32, q->jitter, U32_MAX) : 0;
+
+       delay = tabledist(q->latency, constrained_jitter,
                          &q->delay_cor, q->delay_dist);

        now = ktime_get_ns();