| Message ID | 1453976119-24372-5-git-send-email-alex.bennee@linaro.org (mailing list archive) |
| --- | --- |
| State | New, archived |
On 28/01/2016 11:15, Alex Bennée wrote:
> Running "make check" under ThreadSanitizer pointed to a number of
> potential races.
>
> - use atomic_mb_read to check bh->scheduled
> - use atomic_set/read primitives to manipulate bh->idle
>
> The bh->idle changes are mostly for the benefit of explicit marking for
> tsan but the code in non-sanitised builds should be the same.

These are harmless because of the aio_notify calls that happen after
bh->scheduled is written, but they do make the code easier to understand.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  async.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/async.c b/async.c
> index e106072..a9cb9c3 100644
> --- a/async.c
> +++ b/async.c
> @@ -85,10 +85,10 @@ int aio_bh_poll(AioContext *ctx)
>           */
>          if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
>              /* Idle BHs and the notify BH don't count as progress */
> -            if (!bh->idle && bh != ctx->notify_dummy_bh) {
> +            if (!atomic_read(&bh->idle) && bh != ctx->notify_dummy_bh) {
>                  ret = 1;
>              }
> -            bh->idle = 0;
> +            atomic_set(&bh->idle, 0);
>              aio_bh_call(bh);
>          }
>      }
> @@ -128,7 +128,7 @@ void qemu_bh_schedule(QEMUBH *bh)
>      AioContext *ctx;
>
>      ctx = bh->ctx;
> -    bh->idle = 0;
> +    atomic_set(&bh->idle, 0);
>      /* The memory barrier implicit in atomic_xchg makes sure that:
>       * 1. idle & any writes needed by the callback are done before the
>       *    locations are read in the aio_bh_poll.
> @@ -165,8 +165,8 @@ aio_compute_timeout(AioContext *ctx)
>      QEMUBH *bh;
>
>      for (bh = ctx->first_bh; bh; bh = bh->next) {
> -        if (!bh->deleted && bh->scheduled) {
> -            if (bh->idle) {
> +        if (!bh->deleted && atomic_mb_read(&bh->scheduled)) {
> +            if (atomic_read(&bh->idle)) {
>                  /* idle bottom halves will be polled at least
>                   * every 10ms */
>                  timeout = 10000000;
> @@ -286,7 +286,7 @@ void aio_notify(AioContext *ctx)
>       * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
>       */
>      smp_mb();
> -    if (ctx->notify_me) {
> +    if (atomic_read(&ctx->notify_me)) {
>          event_notifier_set(&ctx->notifier);
>          atomic_mb_set(&ctx->notified, true);
>      }
```diff
diff --git a/async.c b/async.c
index e106072..a9cb9c3 100644
--- a/async.c
+++ b/async.c
@@ -85,10 +85,10 @@ int aio_bh_poll(AioContext *ctx)
          */
         if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
             /* Idle BHs and the notify BH don't count as progress */
-            if (!bh->idle && bh != ctx->notify_dummy_bh) {
+            if (!atomic_read(&bh->idle) && bh != ctx->notify_dummy_bh) {
                 ret = 1;
             }
-            bh->idle = 0;
+            atomic_set(&bh->idle, 0);
             aio_bh_call(bh);
         }
     }
@@ -128,7 +128,7 @@ void qemu_bh_schedule(QEMUBH *bh)
     AioContext *ctx;
 
     ctx = bh->ctx;
-    bh->idle = 0;
+    atomic_set(&bh->idle, 0);
     /* The memory barrier implicit in atomic_xchg makes sure that:
      * 1. idle & any writes needed by the callback are done before the
      *    locations are read in the aio_bh_poll.
@@ -165,8 +165,8 @@ aio_compute_timeout(AioContext *ctx)
     QEMUBH *bh;
 
     for (bh = ctx->first_bh; bh; bh = bh->next) {
-        if (!bh->deleted && bh->scheduled) {
-            if (bh->idle) {
+        if (!bh->deleted && atomic_mb_read(&bh->scheduled)) {
+            if (atomic_read(&bh->idle)) {
                 /* idle bottom halves will be polled at least
                  * every 10ms */
                 timeout = 10000000;
@@ -286,7 +286,7 @@ void aio_notify(AioContext *ctx)
      * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
      */
     smp_mb();
-    if (ctx->notify_me) {
+    if (atomic_read(&ctx->notify_me)) {
         event_notifier_set(&ctx->notifier);
         atomic_mb_set(&ctx->notified, true);
     }
```
Running "make check" under ThreadSanitizer pointed to a number of
potential races.

- use atomic_mb_read to check bh->scheduled
- use atomic_set/read primitives to manipulate bh->idle

The bh->idle changes are mostly for the benefit of explicit marking for
tsan but the code in non-sanitised builds should be the same.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 async.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)