Message ID | 99a0ff7379a345ed60a6092866c9a42f667b37b5.1476450059.git.berto@igalia.com (mailing list archive)
---|---
State | New, archived
On 14/10/2016 15:08, Alberto Garcia wrote:
> bdrv_drain_all() doesn't allow the caller to do anything after all pending requests have been completed but before block jobs are resumed.
>
> This patch splits bdrv_drain_all() into _begin() and _end() for that purpose. It also adds aio_{disable,enable}_external() calls to disable external clients in the meantime.
>
> Signed-off-by: Alberto Garcia <berto@igalia.com>
> ---
>  block/io.c            | 24 +++++++++++++++++++++---
>  include/block/block.h |  2 ++
>  2 files changed, 23 insertions(+), 3 deletions(-)

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Am 14.10.2016 um 15:08 hat Alberto Garcia geschrieben:
> bdrv_drain_all() doesn't allow the caller to do anything after all pending requests have been completed but before block jobs are resumed.
>
> This patch splits bdrv_drain_all() into _begin() and _end() for that purpose. It also adds aio_{disable,enable}_external() calls to disable external clients in the meantime.
>
> Signed-off-by: Alberto Garcia <berto@igalia.com>

This looks okay as a first step, possibly enough for this series (we'll have to review this carefully), but it leaves us with a rather limited version of bdrv_drain_all_begin/end that excludes many useful cases. One of them is that John wants to use it around QMP transactions.

Specifically, you can't add a new BDS or a new block job in a drain_all section because then bdrv_drain_all_end() would try to unpause the new thing even though it has never been paused. Depending on what else we did with it, this will either corrupt the pause counters or just directly fail an assertion.

My first thoughts were about how to let an unpause succeed without a previous pause for these objects, but actually I think this isn't what we should do. We rather want to actually do the pause instead because even new BDSes and block jobs should probably start in a quiesced state when created inside a drain_all section.

This is somewhat similar to attaching a BlockBackend to a drained BDS. We already take care to immediately quiesce the BB in this case (even though this isn't very effective because the BB doesn't propagate it correctly to its users yet...)

Thoughts? (Paolo, I'm looking at you.)

Kevin
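[Editor's note: the following is a minimal, self-contained model of the counter problem Kevin describes; the struct and function names are illustrative toys, not the actual QEMU ones. An end() that blindly decrements the pause count of every object it can see, including one created inside the drained section, either trips an assertion or leaves the counter negative.]

```c
#include <assert.h>
#include <stdio.h>

/* Toy stand-in for a block job's pause counter. */
typedef struct ToyJob {
    const char *name;
    int pause_count;
} ToyJob;

static void toy_pause(ToyJob *job)  { job->pause_count++; }

static void toy_resume(ToyJob *job)
{
    assert(job->pause_count > 0);   /* fails for a job that was never paused */
    job->pause_count--;
}

int main(void)
{
    ToyJob existing = { "existing-job", 0 };

    /* drain_all_begin(): pause everything that exists right now. */
    toy_pause(&existing);

    /* A new job is created inside the drained section but is not paused. */
    ToyJob added = { "new-job", 0 };

    /* drain_all_end(): resumes *all* jobs it can see, including the new one.
     * The assertion in toy_resume() fires for "new-job"; without the assert,
     * its counter would go to -1 and later pause/resume pairs would be off. */
    toy_resume(&existing);
    toy_resume(&added);

    printf("%s: %d, %s: %d\n", existing.name, existing.pause_count,
           added.name, added.pause_count);
    return 0;
}
```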
On Wed 19 Oct 2016 07:11:20 PM CEST, Kevin Wolf wrote:
>> bdrv_drain_all() doesn't allow the caller to do anything after all pending requests have been completed but before block jobs are resumed.
>>
>> This patch splits bdrv_drain_all() into _begin() and _end() for that purpose. It also adds aio_{disable,enable}_external() calls to disable external clients in the meantime.
>>
>> Signed-off-by: Alberto Garcia <berto@igalia.com>
>
> This looks okay as a first step, possibly enough for this series (we'll have to review this carefully), but it leaves us with a rather limited version of bdrv_drain_all_begin/end that excludes many useful cases. One of them is that John wants to use it around QMP transactions.
>
> Specifically, you can't add a new BDS or a new block job in a drain_all section because then bdrv_drain_all_end() would try to unpause the new thing even though it has never been paused. Depending on what else we did with it, this will either corrupt the pause counters or just directly fail an assertion.

The problem is: do you want to be able to create a new block job and let it run? Because then you can end up having the same problem that this patch is trying to prevent if the new job attempts to reopen a BlockDriverState.

Berto
Am 20.10.2016 um 10:25 hat Alberto Garcia geschrieben:
> On Wed 19 Oct 2016 07:11:20 PM CEST, Kevin Wolf wrote:
> >> bdrv_drain_all() doesn't allow the caller to do anything after all pending requests have been completed but before block jobs are resumed.
> >>
> >> This patch splits bdrv_drain_all() into _begin() and _end() for that purpose. It also adds aio_{disable,enable}_external() calls to disable external clients in the meantime.
> >>
> >> Signed-off-by: Alberto Garcia <berto@igalia.com>
> >
> > This looks okay as a first step, possibly enough for this series (we'll have to review this carefully), but it leaves us with a rather limited version of bdrv_drain_all_begin/end that excludes many useful cases. One of them is that John wants to use it around QMP transactions.
> >
> > Specifically, you can't add a new BDS or a new block job in a drain_all section because then bdrv_drain_all_end() would try to unpause the new thing even though it has never been paused. Depending on what else we did with it, this will either corrupt the pause counters or just directly fail an assertion.
>
> The problem is: do you want to be able to create a new block job and let it run? Because then you can end up having the same problem that this patch is trying to prevent if the new job attempts to reopen a BlockDriverState.

No, as I wrote it would have to be automatically paused on creation if it is created in a drained_all section. It would only actually start to run after bdrv_drain_all_end().

Kevin
On 10/20/2016 04:25 AM, Alberto Garcia wrote:
> On Wed 19 Oct 2016 07:11:20 PM CEST, Kevin Wolf wrote:
>
>>> bdrv_drain_all() doesn't allow the caller to do anything after all pending requests have been completed but before block jobs are resumed.
>>>
>>> This patch splits bdrv_drain_all() into _begin() and _end() for that purpose. It also adds aio_{disable,enable}_external() calls to disable external clients in the meantime.
>>>
>>> Signed-off-by: Alberto Garcia <berto@igalia.com>
>>
>> This looks okay as a first step, possibly enough for this series (we'll have to review this carefully), but it leaves us with a rather limited version of bdrv_drain_all_begin/end that excludes many useful cases. One of them is that John wants to use it around QMP transactions.
>>
>> Specifically, you can't add a new BDS or a new block job in a drain_all section because then bdrv_drain_all_end() would try to unpause the new thing even though it has never been paused. Depending on what else we did with it, this will either corrupt the pause counters or just directly fail an assertion.
>
> The problem is: do you want to be able to create a new block job and let it run? Because then you can end up having the same problem that this patch is trying to prevent if the new job attempts to reopen a BlockDriverState.
>
> Berto

The plan was to create jobs in a pre-started mode and only to engage them after the drained section. Do any jobs re-open a BDS prior to the creation of their coroutine?

--js
On Fri 21 Oct 2016 08:55:51 PM CEST, John Snow <jsnow@redhat.com> wrote:
>>>> bdrv_drain_all() doesn't allow the caller to do anything after all pending requests have been completed but before block jobs are resumed.
>>>>
>>>> This patch splits bdrv_drain_all() into _begin() and _end() for that purpose. It also adds aio_{disable,enable}_external() calls to disable external clients in the meantime.
>>>>
>>>> Signed-off-by: Alberto Garcia <berto@igalia.com>
>>>
>>> This looks okay as a first step, possibly enough for this series (we'll have to review this carefully), but it leaves us with a rather limited version of bdrv_drain_all_begin/end that excludes many useful cases. One of them is that John wants to use it around QMP transactions.
>>>
>>> Specifically, you can't add a new BDS or a new block job in a drain_all section because then bdrv_drain_all_end() would try to unpause the new thing even though it has never been paused. Depending on what else we did with it, this will either corrupt the pause counters or just directly fail an assertion.
>>
>> The problem is: do you want to be able to create a new block job and let it run? Because then you can end up having the same problem that this patch is trying to prevent if the new job attempts to reopen a BlockDriverState.
>
> The plan was to create jobs in a pre-started mode and only to engage them after the drained section. Do any jobs re-open a BDS prior to the creation of their coroutine?

Yeah, block-commit for example (see commit_start()), and block-stream after this series.

Berto
On 10/21/2016 03:03 PM, Alberto Garcia wrote:
> On Fri 21 Oct 2016 08:55:51 PM CEST, John Snow <jsnow@redhat.com> wrote:
>
>>>>> bdrv_drain_all() doesn't allow the caller to do anything after all pending requests have been completed but before block jobs are resumed.
>>>>>
>>>>> This patch splits bdrv_drain_all() into _begin() and _end() for that purpose. It also adds aio_{disable,enable}_external() calls to disable external clients in the meantime.
>>>>>
>>>>> Signed-off-by: Alberto Garcia <berto@igalia.com>
>>>>
>>>> This looks okay as a first step, possibly enough for this series (we'll have to review this carefully), but it leaves us with a rather limited version of bdrv_drain_all_begin/end that excludes many useful cases. One of them is that John wants to use it around QMP transactions.
>>>>
>>>> Specifically, you can't add a new BDS or a new block job in a drain_all section because then bdrv_drain_all_end() would try to unpause the new thing even though it has never been paused. Depending on what else we did with it, this will either corrupt the pause counters or just directly fail an assertion.
>>>
>>> The problem is: do you want to be able to create a new block job and let it run? Because then you can end up having the same problem that this patch is trying to prevent if the new job attempts to reopen a BlockDriverState.
>>>
>>
>> The plan was to create jobs in a pre-started mode and only to engage them after the drained section. Do any jobs re-open a BDS prior to the creation of their coroutine?
>
> Yeah, block-commit for example (see commit_start()), and block-stream after this series.
>
> Berto

Ah, that is a problem for that use case then, but no matter. I think I've worked out with Kevin the other day (Kevin hit me with a rather large trout) that a drained_all shouldn't really be necessary for qmp_transactions so long as each action is diligent in using bdrv_drained_begin/end for any given BDS that is relevant to it.

I was worried at one point about this flow:

1) bdrv_drained_begin(0x01), do_stuff()
2) bdrv_drained_begin(0x02), do_stuff()
[...]

And thought I might need to rework it as:

bdrv_drained_all_begin()
1) do_stuff()
2) do_stuff()
bdrv_drained_all_end()

but Kevin has pointed out that even though actions and drains are interspersed, the point-in-time simply becomes the time at last drain and it still should be coherent, so I won't need the drain-all after all.

--js
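[Editor's note: a brief sketch of the per-BDS pattern John describes, written against the existing bdrv_drained_begin()/bdrv_drained_end() API; the action and variable names are made up for illustration, this is not actual QEMU transaction code.]

```c
/*
 * Each transaction action drains only the BDS it touches, instead of
 * wrapping the whole transaction in a drain-all section.
 * "prepare_snapshot_action", "bs1" and "bs2" are illustrative names.
 */
static void prepare_snapshot_action(BlockDriverState *bs)
{
    bdrv_drained_begin(bs);   /* quiesce just this node ... */
    /* ... inspect or modify its state at this point in time ... */
    bdrv_drained_end(bs);
}

static void run_transaction(BlockDriverState *bs1, BlockDriverState *bs2)
{
    /* Actions and drains are interspersed; each action sees the state of
     * its own BDS as of its own drain, which is still coherent. */
    prepare_snapshot_action(bs1);
    prepare_snapshot_action(bs2);
}
```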
On 19/10/2016 19:11, Kevin Wolf wrote:
> Am 14.10.2016 um 15:08 hat Alberto Garcia geschrieben:
>> bdrv_drain_all() doesn't allow the caller to do anything after all pending requests have been completed but before block jobs are resumed.
>>
>> This patch splits bdrv_drain_all() into _begin() and _end() for that purpose. It also adds aio_{disable,enable}_external() calls to disable external clients in the meantime.
>>
>> Signed-off-by: Alberto Garcia <berto@igalia.com>
>
> This looks okay as a first step, possibly enough for this series (we'll have to review this carefully), but it leaves us with a rather limited version of bdrv_drain_all_begin/end that excludes many useful cases. One of them is that John wants to use it around QMP transactions.
>
> Specifically, you can't add a new BDS or a new block job in a drain_all section because then bdrv_drain_all_end() would try to unpause the new thing even though it has never been paused. Depending on what else we did with it, this will either corrupt the pause counters or just directly fail an assertion.
>
> My first thoughts were about how to let an unpause succeed without a previous pause for these objects, but actually I think this isn't what we should do. We rather want to actually do the pause instead because even new BDSes and block jobs should probably start in a quiesced state when created inside a drain_all section.

Yes, I agree with this. It shouldn't be too hard to implement it. It would require a BlockDriverState to look at the global "inside bdrv_drain_all_begin" state, and ask its BlockBackend to disable itself upon bdrv_replace_child.

Basically, "foo->quiesce_counter" should become "foo->quiesce_counter || all_quiesce_counter", I think. It may well be done as a separate patch if there is a TODO comment in bdrv_replace_child; as Kevin said, there are assertions to protect against misuse.

Paolo
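[Editor's note: a self-contained sketch of the check Paolo proposes. The names model those used in the discussion (a per-node quiesce_counter plus a global all_quiesce_counter), but this is only an illustration of the idea, not the actual implementation.]

```c
#include <stdbool.h>
#include <stdio.h>

/* Global counter assumed to be bumped by bdrv_drain_all_begin() and
 * dropped by bdrv_drain_all_end(); models the "inside drain_all" state. */
static int all_quiesce_counter;

/* Minimal model of a BlockDriverState with a per-node quiesce counter. */
typedef struct ToyBDS {
    int quiesce_counter;
} ToyBDS;

/* The proposal: a node counts as quiesced if either it was drained
 * individually or a drain-all section is currently active. */
static bool bds_is_quiesced(const ToyBDS *bs)
{
    return bs->quiesce_counter > 0 || all_quiesce_counter > 0;
}

int main(void)
{
    all_quiesce_counter++;                     /* enter a drain_all section */

    ToyBDS fresh = { .quiesce_counter = 0 };   /* added inside the section */
    printf("fresh node quiesced: %d\n", bds_is_quiesced(&fresh));  /* 1 */

    all_quiesce_counter--;                     /* leave the section */
    printf("fresh node quiesced: %d\n", bds_is_quiesced(&fresh));  /* 0 */
    return 0;
}
```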
On Mon 24 Oct 2016 12:53:41 PM CEST, Paolo Bonzini wrote:
>> My first thoughts were about how to let an unpause succeed without a previous pause for these objects, but actually I think this isn't what we should do. We rather want to actually do the pause instead because even new BDSes and block jobs should probably start in a quiesced state when created inside a drain_all section.
>
> Yes, I agree with this. It shouldn't be too hard to implement it. It would require a BlockDriverState to look at the global "inside bdrv_drain_all_begin" state, and ask its BlockBackend to disable itself upon bdrv_replace_child.

Why in bdrv_replace_child()? bdrv_drain_all_end() enables all BDSs, but if you add one with "blockdev-add" it's not going to be disabled using this method.

In addition to that block jobs need the same, don't they? Something like "job->pause_count = all_quiesce_counter" in the initialization. I think we'd also need to add block_job_pause_point() at the beginning of each one of their coroutines, in order to make sure that they really start paused.

Berto
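[Editor's note: a sketch of what Berto suggests for jobs. block_job_pause_point() and job->pause_count are real at this point; the initialization from all_quiesce_counter is only the proposal, and the toy helpers below are illustrative, not code from the series.]

```c
/*
 * Sketch of the proposal, not actual QEMU code: a job created inside a
 * drain_all section starts with its pause count pre-loaded, and the job
 * coroutine checks for a pending pause before doing any work.
 */
static void toy_block_job_create(BlockJob *job)
{
    /* If we are inside a drain_all section, begin life paused.
     * "all_quiesce_counter" is the assumed global drain-all count. */
    job->pause_count = all_quiesce_counter;
}

static void coroutine_fn toy_job_run(void *opaque)
{
    BlockJob *job = opaque;

    /* Proposed: yield immediately if a pause is pending, so the job really
     * starts paused instead of racing with bdrv_drain_all_end(). */
    block_job_pause_point(job);

    /* ... normal job main loop follows ... */
}
```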
On 25/10/2016 15:39, Alberto Garcia wrote:
> On Mon 24 Oct 2016 12:53:41 PM CEST, Paolo Bonzini wrote:
>
>>> My first thoughts were about how to let an unpause succeed without a previous pause for these objects, but actually I think this isn't what we should do. We rather want to actually do the pause instead because even new BDSes and block jobs should probably start in a quiesced state when created inside a drain_all section.
>>
>> Yes, I agree with this. It shouldn't be too hard to implement it. It would require a BlockDriverState to look at the global "inside bdrv_drain_all_begin" state, and ask its BlockBackend to disable itself upon bdrv_replace_child.
>
> Why in bdrv_replace_child()? bdrv_drain_all_end() enables all BDSs, but if you add one with "blockdev-add" it's not going to be disabled using this method.

You only need to disable it when blk_insert_bs is called. In fact...

> In addition to that block jobs need the same, don't they? Something like "job->pause_count = all_quiesce_counter" in the initialization.

... we discussed a couple weeks ago that block jobs should register BlkDeviceOps that pause the job in the drained_begin callback. So when the block job calls blk_insert_bs, the drained_begin callback would start the job as paused.

> I think we'd also need to add block_job_pause_point() at the beginning of each one of their coroutines, in order to make sure that they really start paused.

It depends on the job, for example streaming starts with block_job_sleep_ns. Commit also does, except for some blk_getlength and blk_truncate calls that could be moved to the caller.

Paolo
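[Editor's note: a sketch of the callback-based approach Paolo mentions. This is a plan under discussion, not existing code: the drained_begin/drained_end callbacks on the device-ops structure are assumed to exist, and the wiring shown is purely illustrative. blk_set_dev_ops(), blk_insert_bs(), block_job_pause() and block_job_resume() are existing functions.]

```c
/*
 * Sketch only: the job registers device ops on its BlockBackend, and the
 * (assumed) drained_begin/drained_end callbacks pause and resume the job.
 * With this in place, calling blk_insert_bs() on an already-drained BDS
 * would immediately invoke drained_begin, so the job starts out paused.
 */
static void job_drained_begin(void *opaque)
{
    BlockJob *job = opaque;
    block_job_pause(job);      /* job is quiesced for the drained section */
}

static void job_drained_end(void *opaque)
{
    BlockJob *job = opaque;
    block_job_resume(job);     /* picks up again once draining ends */
}

static const BlockDevOps job_dev_ops = {
    .drained_begin = job_drained_begin,   /* assumed callback, see note */
    .drained_end   = job_drained_end,     /* assumed callback, see note */
};

/* On job creation (illustrative):
 *     blk_set_dev_ops(job->blk, &job_dev_ops, job);
 *     blk_insert_bs(job->blk, bs);
 */
```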
Am 25.10.2016 um 15:53 hat Paolo Bonzini geschrieben:
> On 25/10/2016 15:39, Alberto Garcia wrote:
> > On Mon 24 Oct 2016 12:53:41 PM CEST, Paolo Bonzini wrote:
> >
> >>> My first thoughts were about how to let an unpause succeed without a previous pause for these objects, but actually I think this isn't what we should do. We rather want to actually do the pause instead because even new BDSes and block jobs should probably start in a quiesced state when created inside a drain_all section.
> >>
> >> Yes, I agree with this. It shouldn't be too hard to implement it. It would require a BlockDriverState to look at the global "inside bdrv_drain_all_begin" state, and ask its BlockBackend to disable itself upon bdrv_replace_child.
> >
> > Why in bdrv_replace_child()? bdrv_drain_all_end() enables all BDSs, but if you add one with "blockdev-add" it's not going to be disabled using this method.
>
> You only need to disable it when blk_insert_bs is called. In fact...

This assumes that the block driver doesn't issue internal background I/O by itself. Probably true for everything that we have today, but it would probably be cleaner to quiesce it directly in bdrv_open_common().

> > In addition to that block jobs need the same, don't they? Something like "job->pause_count = all_quiesce_counter" in the initialization.
>
> ... we discussed a couple weeks ago that block jobs should register BlkDeviceOps that pause the job in the drained_begin callback. So when the block job calls blk_insert_bs, the drained_begin callback would start the job as paused.

Yes, should, but doing this kind of infrastructure work isn't something for this series.

> > I think we'd also need to add block_job_pause_point() at the beginning of each one of their coroutines, in order to make sure that they really start paused.
>
> It depends on the job, for example streaming starts with block_job_sleep_ns. Commit also does, except for some blk_getlength and blk_truncate calls that could be moved to the caller.

Kevin
On 25/10/2016 16:38, Kevin Wolf wrote:
> Am 25.10.2016 um 15:53 hat Paolo Bonzini geschrieben:
>> On 25/10/2016 15:39, Alberto Garcia wrote:
>>> On Mon 24 Oct 2016 12:53:41 PM CEST, Paolo Bonzini wrote:
>>>
>>>>> My first thoughts were about how to let an unpause succeed without a previous pause for these objects, but actually I think this isn't what we should do. We rather want to actually do the pause instead because even new BDSes and block jobs should probably start in a quiesced state when created inside a drain_all section.
>>>>
>>>> Yes, I agree with this. It shouldn't be too hard to implement it. It would require a BlockDriverState to look at the global "inside bdrv_drain_all_begin" state, and ask its BlockBackend to disable itself upon bdrv_replace_child.
>>>
>>> Why in bdrv_replace_child()? bdrv_drain_all_end() enables all BDSs, but if you add one with "blockdev-add" it's not going to be disabled using this method.
>>
>> You only need to disable it when blk_insert_bs is called. In fact...
>
> This assumes that the block driver doesn't issue internal background I/O by itself. Probably true for everything that we have today, but it would probably be cleaner to quiesce it directly in bdrv_open_common().

So

    bs->quiesce_counter = all_quiesce_counter;

? That would work. Should bdrv_close assert bs->quiesce_counter==0 (which implies all_quiesce_counter == 0), since it usually has to do I/O?

>>> In addition to that block jobs need the same, don't they? Something like "job->pause_count = all_quiesce_counter" in the initialization.
>>
>> ... we discussed a couple weeks ago that block jobs should register BlkDeviceOps that pause the job in the drained_begin callback. So when the block job calls blk_insert_bs, the drained_begin callback would start the job as paused.
>
> Yes, should, but doing this kind of infrastructure work isn't something for this series.

I agree. I was just explaining why it wouldn't be necessary to initialize job->pause_count.

Paolo

>>> I think we'd also need to add block_job_pause_point() at the beginning of each one of their coroutines, in order to make sure that they really start paused.
>>
>> It depends on the job, for example streaming starts with block_job_sleep_ns. Commit also does, except for some blk_getlength and blk_truncate calls that could be moved to the caller.
>
> Kevin
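[Editor's note: a sketch of what that exchange could look like in code. bdrv_open_common(), bdrv_close() and bs->quiesce_counter are real; the placement and the all_quiesce_counter global are only the idea being discussed, and the toy_* wrappers are illustrative stand-ins.]

```c
#include <assert.h>

/* Assumed global drain-all nesting count, bumped by bdrv_drain_all_begin(). */
static int all_quiesce_counter;

/* Idea for bdrv_open_common(): a node created inside a drain_all section
 * starts out quiesced, as if it had been drained individually. */
static void toy_open_common(BlockDriverState *bs)
{
    bs->quiesce_counter = all_quiesce_counter;
}

/* Idea for bdrv_close(): closing usually has to do I/O, so (once the scheme
 * is fully in place) it should not happen while the node is quiesced. */
static void toy_close(BlockDriverState *bs)
{
    assert(bs->quiesce_counter == 0);
    /* ... flush and tear down ... */
}
```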
On Tue 25 Oct 2016 04:38:27 PM CEST, Kevin Wolf wrote:
>> >>> My first thoughts were about how to let an unpause succeed without a previous pause for these objects, but actually I think this isn't what we should do. We rather want to actually do the pause instead because even new BDSes and block jobs should probably start in a quiesced state when created inside a drain_all section.
>> >>
>> >> Yes, I agree with this. It shouldn't be too hard to implement it. It would require a BlockDriverState to look at the global "inside bdrv_drain_all_begin" state, and ask its BlockBackend to disable itself upon bdrv_replace_child.
>> >
>> > Why in bdrv_replace_child()? bdrv_drain_all_end() enables all BDSs, but if you add one with "blockdev-add" it's not going to be disabled using this method.
>>
>> You only need to disable it when blk_insert_bs is called. In fact...
>
> This assumes that the block driver doesn't issue internal background I/O by itself. Probably true for everything that we have today, but it would probably be cleaner to quiesce it directly in bdrv_open_common().

And how about the rest of the things that are going on in bdrv_drain_all_begin()?

    bdrv_parent_drained_begin(bs);
    bdrv_io_unplugged_begin(bs);
    bdrv_drain_recurse(bs);
    aio_disable_external(aio_context);

Berto
On 25/10/2016 16:48, Alberto Garcia wrote:
> And how about the rest of the things that are going on in bdrv_drain_all_begin()?
>
>     bdrv_parent_drained_begin(bs);

No BlockBackend yet, and BlockDriverStates have been quiesced already, so that's okay.

>     bdrv_io_unplugged_begin(bs);

No I/O yet, so that's okay.

>     bdrv_drain_recurse(bs);

Children have been created before, so they're already quiescent.

>     aio_disable_external(aio_context);

This is also a hack for what should be in BlockBackend---which means that we're safe because there's no BlockBackend yet.

Paolo
Am 25.10.2016 um 16:41 hat Paolo Bonzini geschrieben:
> On 25/10/2016 16:38, Kevin Wolf wrote:
> > Am 25.10.2016 um 15:53 hat Paolo Bonzini geschrieben:
> >> On 25/10/2016 15:39, Alberto Garcia wrote:
> >>> On Mon 24 Oct 2016 12:53:41 PM CEST, Paolo Bonzini wrote:
> >>>
> >>>>> My first thoughts were about how to let an unpause succeed without a previous pause for these objects, but actually I think this isn't what we should do. We rather want to actually do the pause instead because even new BDSes and block jobs should probably start in a quiesced state when created inside a drain_all section.
> >>>>
> >>>> Yes, I agree with this. It shouldn't be too hard to implement it. It would require a BlockDriverState to look at the global "inside bdrv_drain_all_begin" state, and ask its BlockBackend to disable itself upon bdrv_replace_child.
> >>>
> >>> Why in bdrv_replace_child()? bdrv_drain_all_end() enables all BDSs, but if you add one with "blockdev-add" it's not going to be disabled using this method.
> >>
> >> You only need to disable it when blk_insert_bs is called. In fact...
> >
> > This assumes that the block driver doesn't issue internal background I/O by itself. Probably true for everything that we have today, but it would probably be cleaner to quiesce it directly in bdrv_open_common().
>
> So
>
>     bs->quiesce_counter = all_quiesce_counter;
>
> ? That would work.

Yes, that's what I had in mind.

> Should bdrv_close assert bs->quiesce_counter==0 (which implies all_quiesce_counter == 0), since it usually has to do I/O?

Hm... Not sure about that. We're still using bdrv_drain_all_begin/end as a function to isolate the BDSes, so some I/O from the caller of drain_all is expected, and that could involve deleting a BDS. But once we fully implemented what you proposed, probably yes.

Kevin
diff --git a/block/io.c b/block/io.c
index b136c89..9418f8b 100644
--- a/block/io.c
+++ b/block/io.c
@@ -275,8 +275,11 @@ void bdrv_drain(BlockDriverState *bs)
  *
  * This function does not flush data to disk, use bdrv_flush_all() for that
  * after calling this function.
+ *
+ * This pauses all block jobs and disables external clients. It must
+ * be paired with bdrv_drain_all_end().
  */
-void bdrv_drain_all(void)
+void bdrv_drain_all_begin(void)
 {
     /* Always run first iteration so any pending completion BHs run */
     bool busy = true;
@@ -300,6 +303,7 @@ void bdrv_drain_all(void)
         bdrv_parent_drained_begin(bs);
         bdrv_io_unplugged_begin(bs);
         bdrv_drain_recurse(bs);
+        aio_disable_external(aio_context);
         aio_context_release(aio_context);
 
         if (!g_slist_find(aio_ctxs, aio_context)) {
@@ -333,17 +337,25 @@ void bdrv_drain_all(void)
         }
     }
 
+    g_slist_free(aio_ctxs);
+}
+
+void bdrv_drain_all_end(void)
+{
+    BlockDriverState *bs;
+    BdrvNextIterator it;
+    BlockJob *job = NULL;
+
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
         aio_context_acquire(aio_context);
+        aio_enable_external(aio_context);
         bdrv_io_unplugged_end(bs);
         bdrv_parent_drained_end(bs);
         aio_context_release(aio_context);
     }
-    g_slist_free(aio_ctxs);
 
-    job = NULL;
     while ((job = block_job_next(job))) {
         AioContext *aio_context = blk_get_aio_context(job->blk);
 
@@ -353,6 +365,12 @@ void bdrv_drain_all(void)
     }
 }
 
+void bdrv_drain_all(void)
+{
+    bdrv_drain_all_begin();
+    bdrv_drain_all_end();
+}
+
 /**
  * Remove an active request from the tracked requests list
  *
diff --git a/include/block/block.h b/include/block/block.h
index 107c603..301d713 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -338,6 +338,8 @@ int bdrv_flush_all(void);
 void bdrv_close_all(void);
 void bdrv_drain(BlockDriverState *bs);
 void coroutine_fn bdrv_co_drain(BlockDriverState *bs);
+void bdrv_drain_all_begin(void);
+void bdrv_drain_all_end(void);
 void bdrv_drain_all(void);
 
 int bdrv_pdiscard(BlockDriverState *bs, int64_t offset, int count);
bdrv_drain_all() doesn't allow the caller to do anything after all pending requests have been completed but before block jobs are resumed.

This patch splits bdrv_drain_all() into _begin() and _end() for that purpose. It also adds aio_{disable,enable}_external() calls to disable external clients in the meantime.

Signed-off-by: Alberto Garcia <berto@igalia.com>
---
 block/io.c            | 24 +++++++++++++++++++++---
 include/block/block.h |  2 ++
 2 files changed, 23 insertions(+), 3 deletions(-)
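[Editor's note: to make the intended calling pattern concrete, here is a brief sketch of how a caller might pair the two new functions declared by the patch; the surrounding function and its purpose are illustrative, not code from this series.]

```c
/*
 * Illustrative caller only: everything between the begin/end pair runs
 * with all pending requests drained, block jobs paused and external
 * clients disabled, which is the window the series needs for safely
 * reopening images.
 */
static void toy_reconfigure_graph(void)
{
    bdrv_drain_all_begin();

    /*
     * No new I/O can be submitted here and no block job will resume,
     * so graph or flag changes (e.g. a reopen to read-write) are safe.
     */

    bdrv_drain_all_end();
}
```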