
[v12,02/19] block: Pause all jobs during bdrv_reopen_multiple()

Message ID 6cf9c4b12d9d95692664efa731d1f56c01dc1581.1477476553.git.berto@igalia.com

Commit Message

Alberto Garcia Oct. 26, 2016, 10:29 a.m. UTC
When a BlockDriverState is about to be reopened, it can trigger certain
operations that need to write to disk. During this process a different
block job can be woken up. If that block job completes and also calls
bdrv_reopen(), it may end up operating on the same BlockDriverState
that is still in the process of being reopened.

This can have fatal consequences, as in the following example:

  1) Block job A starts and sleeps after a while.
  2) Block job B starts and tries to reopen node1 (a qcow2 file).
  3) Reopening node1 means flushing and replacing its qcow2 cache.
  4) While the qcow2 cache is being flushed, job A wakes up.
  5) Job A completes and reopens node1, replacing its cache.
  6) Job B resumes, but the cache that was being flushed no longer
     exists.
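
For illustration only, step 5 could come from a job completion path
roughly like the sketch below. This is a hypothetical sketch, not code
from this series: ExampleJob, s->base and s->orig_flags are made up,
loosely modeled on jobs such as commit that reopen their base image
read-only when they finish.

    typedef struct ExampleJob {
        BlockJob common;
        BlockDriverState *base;  /* node whose flags the job changed */
        int orig_flags;          /* open flags to restore on completion */
    } ExampleJob;

    /* Hypothetical completion path: the job re-enters bdrv_reopen() on
     * a node that another caller may still be in the middle of
     * reopening. */
    static void example_job_complete(ExampleJob *s, int ret)
    {
        /* Restore the node's original open flags (e.g. read-only).
         * This is the bdrv_reopen() call from step 5: if node1 is
         * still being reopened, its qcow2 cache gets replaced under
         * the other caller's feet. */
        bdrv_reopen(s->base, s->orig_flags, NULL);
        block_job_completed(&s->common, ret);
    }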

This patch splits the bdrv_drain_all() call into a
bdrv_drain_all_begin() / bdrv_drain_all_end() pair that keeps all
block jobs paused during bdrv_reopen_multiple(), so that step 4 can
never happen and the operation is safe.
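
A minimal sketch of the difference, assuming the usual drained-section
semantics; reopen_all_nodes() is a hypothetical stand-in for the
prepare/commit loop inside bdrv_reopen_multiple().

    /* Before: bdrv_drain_all() quiesces jobs only momentarily, so a
     * sleeping job can wake up (step 4) while the reopen is ongoing. */
    bdrv_drain_all();
    reopen_all_nodes(bs_queue);     /* jobs may run here -> race */

    /* After: jobs stay paused for the whole drained section. */
    bdrv_drain_all_begin();
    reopen_all_nodes(bs_queue);     /* no job can run until ..._end() */
    bdrv_drain_all_end();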

Note that this scenario can only happen if both bdrv_reopen() calls
are made by block jobs on the same backing chain. Otherwise there's no
chance that the same BlockDriverState appears in both reopen queues.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Kevin Wolf Oct. 26, 2016, 2:36 p.m. UTC | #1
On 26.10.2016 at 12:29, Alberto Garcia wrote:
> [...]

Reviewed-by: Kevin Wolf <kwolf@redhat.com>

Patch

diff --git a/block.c b/block.c
index 7f3e7bc..adbecd0 100644
--- a/block.c
+++ b/block.c
@@ -2090,7 +2090,7 @@  int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
 
     assert(bs_queue != NULL);
 
-    bdrv_drain_all();
+    bdrv_drain_all_begin();
 
     QSIMPLEQ_FOREACH(bs_entry, bs_queue, entry) {
         if (bdrv_reopen_prepare(&bs_entry->state, bs_queue, &local_err)) {
@@ -2120,6 +2120,9 @@  cleanup:
         g_free(bs_entry);
     }
     g_free(bs_queue);
+
+    bdrv_drain_all_end();
+
     return ret;
 }