
[for-7.2,3/5] block/mirror: Fix NULL s->job in active writes

Message ID 20221109165452.67927-4-hreitz@redhat.com
State New, archived
Series block/mirror: Do not wait for active writes

Commit Message

Hanna Czenczek Nov. 9, 2022, 4:54 p.m. UTC
There is a small gap in mirror_start_job() between putting the mirror
filter node into the block graph (the bdrv_append() call) and the actual
job being created.  Before the job is created, MirrorBDSOpaque.job is NULL.

It is possible that requests come in when bdrv_drained_end() is called,
and those requests would see MirrorBDSOpaque.job == NULL.  Have our
filter node handle that case gracefully.

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 block/mirror.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)
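
For reference, the window being closed here looks roughly like this (a
simplified sketch of mirror_start_job(); argument lists abbreviated and
error handling omitted, so not the verbatim block/mirror.c code):

    bdrv_drained_begin(bs);
    ret = bdrv_append(mirror_top_bs, bs, errp);   /* filter node enters the graph */
    bdrv_drained_end(bs);                         /* polls: queued requests can now
                                                   * reach the filter while
                                                   * bs_opaque->job is still NULL */
    ...
    s = block_job_create(job_id, driver, NULL, mirror_top_bs, ...);
    ...
    bs_opaque->job = s;                           /* only now does the filter
                                                   * see a job */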

Comments

Kevin Wolf Nov. 10, 2022, 12:10 p.m. UTC | #1
On 09.11.2022 at 17:54, Hanna Reitz wrote:
> There is a small gap in mirror_start_job() between putting the mirror
> filter node into the block graph (the bdrv_append() call) and the actual
> job being created.  Before the job is created, MirrorBDSOpaque.job is NULL.
> 
> It is possible that requests come in when bdrv_drained_end() is called,
> and those requests would see MirrorBDSOpaque.job == NULL.  Have our
> filter node handle that case gracefully.
> 
> Signed-off-by: Hanna Reitz <hreitz@redhat.com>

This can only happen because bdrv_drained_end() polls, right? So after
changing that it won't be necessary any more, but this series is for 7.2
and the drain one isn't, so this is the right thing to do for now.

Reviewed-by: Kevin Wolf <kwolf@redhat.com>

Hanna Czenczek Nov. 10, 2022, 12:40 p.m. UTC | #2
On 10.11.22 13:10, Kevin Wolf wrote:
> On 09.11.2022 at 17:54, Hanna Reitz wrote:
>> There is a small gap in mirror_start_job() between putting the mirror
>> filter node into the block graph (the bdrv_append() call) and the actual
>> job being created.  Before the job is created, MirrorBDSOpaque.job is NULL.
>>
>> It is possible that requests come in when bdrv_drained_end() is called,
>> and those requests would see MirrorBDSOpaque.job == NULL.  Have our
>> filter node handle that case gracefully.
>>
>> Signed-off-by: Hanna Reitz <hreitz@redhat.com>
> This can only happen because bdrv_drained_end() polls, right? So after
> changing that it won't be necessary any more, but this series is for 7.2
> and the drain one isn't, so this is the right thing to do for now.

I was wondering the same, but I haven’t tested it yet.

Hanna
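
To make the guard below easier to follow in isolation, here is a
self-contained toy model of it (plain C with hypothetical names — Job,
FilterState, filter_write — not QEMU code): the write path defaults to
not copying and only consults the job's fields once the job pointer has
been published.

    /* Toy model of the patch's guard; hypothetical names, not QEMU code. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct Job {
        int ret;
        bool write_blocking;    /* stands in for MIRROR_COPY_MODE_WRITE_BLOCKING */
    } Job;

    typedef struct FilterState {
        Job *job;               /* NULL until the job object is created */
    } FilterState;

    static bool filter_write(FilterState *s)
    {
        bool copy_to_target = false;    /* safe default: plain pass-through */

        if (s->job) {                   /* the guard this patch adds */
            copy_to_target = s->job->ret >= 0 && s->job->write_blocking;
        }
        return copy_to_target;
    }

    int main(void)
    {
        FilterState s = { .job = NULL };
        /* Request arriving in the gap: must not crash, must not mirror. */
        printf("during gap: copy_to_target=%d\n", filter_write(&s));

        Job job = { .ret = 0, .write_blocking = true };
        s.job = &job;                   /* job created; pointer published */
        printf("after job creation: copy_to_target=%d\n", filter_write(&s));
        return 0;
    }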

Patch

diff --git a/block/mirror.c b/block/mirror.c
index 5b6f42392c..251adc5ae0 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1438,11 +1438,13 @@ static int coroutine_fn bdrv_mirror_top_do_write(BlockDriverState *bs,
     MirrorOp *op = NULL;
     MirrorBDSOpaque *s = bs->opaque;
     int ret = 0;
-    bool copy_to_target;
+    bool copy_to_target = false;
 
-    copy_to_target = s->job->ret >= 0 &&
-                     !job_is_cancelled(&s->job->common.job) &&
-                     s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
+    if (s->job) {
+        copy_to_target = s->job->ret >= 0 &&
+                         !job_is_cancelled(&s->job->common.job) &&
+                         s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
+    }
 
     if (copy_to_target) {
         op = active_write_prepare(s->job, offset, bytes);
@@ -1487,11 +1489,13 @@ static int coroutine_fn bdrv_mirror_top_pwritev(BlockDriverState *bs,
     QEMUIOVector bounce_qiov;
     void *bounce_buf;
     int ret = 0;
-    bool copy_to_target;
+    bool copy_to_target = false;
 
-    copy_to_target = s->job->ret >= 0 &&
-                     !job_is_cancelled(&s->job->common.job) &&
-                     s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
+    if (s->job) {
+        copy_to_target = s->job->ret >= 0 &&
+                         !job_is_cancelled(&s->job->common.job) &&
+                         s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
+    }
 
     if (copy_to_target) {
         /* The guest might concurrently modify the data to write; but