Message ID | 20220128155130.13326-1-hreitz@redhat.com
-----------|---------------------------------------------------
State      | New, archived
Series     | [RFC] block/nbd: Move s->ioc on AioContext change
On Fri, Jan 28, 2022 at 04:51:30PM +0100, Hanna Reitz wrote:
> s->ioc must always be attached to the NBD node's AioContext. If that
> context changes, s->ioc must be attached to the new context.

Eww. Good catch; and it looks like this is not the first time we've run
into context issues (a quick grep for "context" through the git history
of nbd/ finds commit e6cada923).

> Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1990835
> Signed-off-by: Hanna Reitz <hreitz@redhat.com>
> ---
> This is an RFC because I believe there are some other things in the NBD
> block driver that need attention on an AioContext change, too. Namely,
> there are two timers (reconnect_delay_timer and open_timer) that are

As Vladimir is more familiar with the timers needed for reconnecting an
NBD connection, I'm hoping he will chime in. But yes, your worry about
context changes sounds reasonable.

> also attached to the node's AioContext, and I'm afraid they need to be
> handled, too. Probably pause them on detach, and resume them on attach,
> but I'm not sure, which is why I'm posting this as an RFC to get some
> comments on that from someone who knows this code better than me. :)
>
> (Also, in a real v1, of course I'd want to add a regression test.)
> ---
>  block/nbd.c | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
>
> diff --git a/block/nbd.c b/block/nbd.c
> index 63dbfa807d..119a774c04 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -2036,6 +2036,25 @@ static void nbd_cancel_in_flight(BlockDriverState *bs)
>      nbd_co_establish_connection_cancel(s->conn);
>  }
>
> +static void nbd_attach_aio_context(BlockDriverState *bs,
> +                                   AioContext *new_context)
> +{
> +    BDRVNBDState *s = bs->opaque;
> +
> +    if (s->ioc) {
> +        qio_channel_attach_aio_context(s->ioc, new_context);
> +    }
> +}
> +
> +static void nbd_detach_aio_context(BlockDriverState *bs)
> +{
> +    BDRVNBDState *s = bs->opaque;
> +
> +    if (s->ioc) {
> +        qio_channel_detach_aio_context(s->ioc);
> +    }
> +}
> +
>  static BlockDriver bdrv_nbd = {
>      .format_name = "nbd",
>      .protocol_name = "nbd",
> @@ -2059,6 +2078,9 @@ static BlockDriver bdrv_nbd = {
>      .bdrv_dirname = nbd_dirname,
>      .strong_runtime_opts = nbd_strong_runtime_opts,
>      .bdrv_cancel_in_flight = nbd_cancel_in_flight,
> +
> +    .bdrv_attach_aio_context = nbd_attach_aio_context,
> +    .bdrv_detach_aio_context = nbd_detach_aio_context,

Looks straightforward, but as you say, fleshing out what else may need
similar treatment could make the "real" v1 more interesting.
28.01.2022 18:51, Hanna Reitz wrote:
> s->ioc must always be attached to the NBD node's AioContext. If that
> context changes, s->ioc must be attached to the new context.
>
> [...]
>
> +static void nbd_attach_aio_context(BlockDriverState *bs,
> +                                   AioContext *new_context)
> +{
> +    BDRVNBDState *s = bs->opaque;
> +
> +    if (s->ioc) {
> +        qio_channel_attach_aio_context(s->ioc, new_context);
> +    }
> +}
> +
> +static void nbd_detach_aio_context(BlockDriverState *bs)
> +{
> +    BDRVNBDState *s = bs->opaque;
> +
> +    if (s->ioc) {
> +        qio_channel_detach_aio_context(s->ioc);
> +    }
> +}
>
> [...]

Hmm. I was so happy to remove these handlers together with the
connection coroutine :)  But you are right, it seems I removed too
much :(

open_timer exists only during the bdrv_open() handler, so I expect it
not to exist at attach/detach time.

reconnect_delay_timer should exist only during an I/O request: it is
created during a request if we don't have a connection, and the request
will not finish until the timer has elapsed or the connection is
established (the timer should be removed in that case, too). So, again,
when attaching/detaching the context we should be in a drained section,
with no in-flight requests and hence no reconnect_delay_timer.

So, I think assertions that both timer pointers are NULL should be
enough in the attach/detach handlers.
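[Editorial sketch] Vladimir's suggestion above might look roughly like
the following, layered on top of the RFC's handlers. This is a sketch
only, not a real patch: the surrounding types and helpers are QEMU
internals, and the exact field names (`open_timer`,
`reconnect_delay_timer`) are taken from the discussion rather than
verified against the tree.

```c
static void nbd_attach_aio_context(BlockDriverState *bs,
                                   AioContext *new_context)
{
    BDRVNBDState *s = bs->opaque;

    /*
     * Attach/detach happen inside a drained section: open_timer lives
     * only during bdrv_open(), and reconnect_delay_timer only during
     * an in-flight request, so neither may exist here.
     */
    assert(!s->open_timer);
    assert(!s->reconnect_delay_timer);

    if (s->ioc) {
        qio_channel_attach_aio_context(s->ioc, new_context);
    }
}

static void nbd_detach_aio_context(BlockDriverState *bs)
{
    BDRVNBDState *s = bs->opaque;

    /* Same invariant as on attach. */
    assert(!s->open_timer);
    assert(!s->reconnect_delay_timer);

    if (s->ioc) {
        qio_channel_detach_aio_context(s->ioc);
    }
}
```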
On 01.02.22 12:18, Vladimir Sementsov-Ogievskiy wrote:
> 28.01.2022 18:51, Hanna Reitz wrote:
>> s->ioc must always be attached to the NBD node's AioContext. If that
>> context changes, s->ioc must be attached to the new context.
>>
>> [...]
>
> Hmm. I was so happy to remove these handlers together with
> connection-coroutine :) . But you are right, seems I've removed too
> much :(.
>
> open_timer exists only during bdrv_open() handler, so, I hope on
> attach/detach it should not exist.

That’s… kind of surprising. It’s good for me here, but as far as I can
see it means that all of qemu blocks until the connection succeeds,
right? That doesn’t seem quite ideal...

Anyway, good for me. O:)

> reconnect_delay_timer should exist only during IO request: it's
> created during request if we don't have a connection. And request will
> not finish until timer elapsed or connection established (timer should
> be removed in this case too). So, again, when attaching / detaching
> the context we should be in a drained sections, so no in-flight
> requests and no reconnect_delay_timer.

Got it. FWIW, other block drivers rely on this, too (e.g. null-aio with
latency-ns set creates a timer in every I/O request and settles the
request once the timer expires).

> So, I think assertions that both timer pointers are NULL should be
> enough in attach / detach handlers.

Great! I’ll cook up v1.
01.02.2022 14:40, Hanna Reitz wrote:
> On 01.02.22 12:18, Vladimir Sementsov-Ogievskiy wrote:
>> 28.01.2022 18:51, Hanna Reitz wrote:
>>> s->ioc must always be attached to the NBD node's AioContext. If
>>> that context changes, s->ioc must be attached to the new context.
>>>
>>> [...]
>>
>> open_timer exists only during bdrv_open() handler, so, I hope on
>> attach/detach it should not exist.
>
> That’s… kind of surprising. It’s good for me here, but as far as I
> can see it means that all of qemu blocks until the connection
> succeeds, right? That doesn’t seem quite ideal...

Right. Still, the intended usage was for the command line, so we can
wait for the connection at QEMU startup. Using it in blockdev-add while
the VM is running is questionable.

In v3 I had a patch to make blockdev-add a coroutine QMP command to
solve this problem. But it raised a discussion, and I decided that it
was not a reason to block the whole feature:

https://patchwork.kernel.org/project/qemu-devel/patch/20210906190654.183421-3-vsementsov@virtuozzo.com/

> Anyway, good for me. O:)
On 01.02.22 12:40, Hanna Reitz wrote:
> On 01.02.22 12:18, Vladimir Sementsov-Ogievskiy wrote:
>> 28.01.2022 18:51, Hanna Reitz wrote:
>>> [...]
>>
>> Hmm. I was so happy to remove these handlers together with
>> connection-coroutine :) . But you are right, seems I've removed too
>> much :(.
>>
>> open_timer exists only during bdrv_open() handler, so, I hope on
>> attach/detach it should not exist.
>
> That’s… kind of surprising. It’s good for me here, but as far as I
> can see it means that all of qemu blocks until the connection
> succeeds, right? That doesn’t seem quite ideal...
>
> Anyway, good for me. O:)
>
>> reconnect_delay_timer should exist only during IO request: it's
>> created during request if we don't have a connection. And request
>> will not finish until timer elapsed or connection established (timer
>> should be removed in this case too). So, again, when attaching /
>> detaching the context we should be in a drained sections, so no
>> in-flight requests and no reconnect_delay_timer.
>
> Got it. FWIW, other block drivers rely on this, too (e.g. null-aio
> with latency-ns set creates a timer in every I/O request and settles
> the request once the timer expires).

Looks like the timer isn’t removed when the connection is reestablished.
When I add an `assert(!s->reconnect_delay_timer)` to
`nbd_attach_aio_context()` (on top of this patch), then I get:

$ ./qemu-nbd \
    --fork \
    --pid-file=/tmp/nbd.pid \
    --socket=/tmp/nbd.sock \
    -f raw \
    null-co://

$ (echo '{"execute": "qmp_capabilities"}';
   sleep 1;
   kill $(cat /tmp/nbd.pid);
   ./qemu-nbd \
       --fork \
       --pid-file=/tmp/nbd.pid \
       --socket=/tmp/nbd.sock \
       -f raw \
       null-co://;
   echo '{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io nbd \"write 0 64k\""}}';
   echo '{"execute": "x-blockdev-set-iothread", "arguments": {"node-name": "nbd", "iothread": "iothr0"}}') \
  | ./qemu-system-x86_64 \
      -qmp stdio \
      -blockdev '{ "node-name": "nbd", "driver": "nbd", "reconnect-delay": 1, "server": { "type": "unix", "path": "/tmp/nbd.sock" } }' \
      -object iothread,id=iothr0

{"QMP": {"version": {"qemu": {"micro": 50, "minor": 2, "major": 6}, "package": "v6.2.0-1288-ge3116c38f7-dirty"}, "capabilities": ["oob"]}}
{"return": {}}
wrote 65536/65536 bytes at offset 0
64 KiB, 1 ops; 00.00 sec (170.326 MiB/sec and 2725.2189 ops/sec)
{"return": ""}
qemu-system-x86_64: ../block/nbd.c:2044: nbd_attach_aio_context: Assertion `!s->reconnect_delay_timer' failed.
Aborted (core dumped)

(The above kills the NBD server and immediately restarts it, so that the
following write request will have to reconnect, and immediately succeed.
The failed assertion when changing the AioContext shows that the timer
is still there after successfully reconnecting.)

Not sure whether that’s a problem in normal operation. On master,
there’s no failure, of course; the only problem is that
`reconnect_delay_timer_cb()` will probably be run in the old context.
If in the new context we then have a concurrent reconnection attempt,
perhaps the `reconnect_delay_timer_del()` might interfere with
`reconnect_delay_timer_init()`, such that the former frees the timer
(and sets it to NULL), and then the `timer_mod()` call in the latter
function accesses NULL.
But that’d be extremely difficult to test, because that’s a very small
time window...

I can definitely see the following problem with this RFC patch applied,
though I don’t quite understand it:

$ ./qemu-nbd \
    --fork \
    --pid-file=/tmp/nbd.pid \
    --socket=/tmp/nbd.sock \
    -f raw \
    null-co://

$ (echo '{"execute": "qmp_capabilities"}';
   sleep 1;
   kill $(cat /tmp/nbd.pid);
   ./qemu-nbd --fork --pid-file=/tmp/nbd.pid --socket=/tmp/nbd.sock -f raw null-co://;
   echo '{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io nbd \"write 0 64k\""}}';
   echo '{"execute": "x-blockdev-set-iothread", "arguments": {"node-name": "nbd", "iothread": "iothr0"}}';
   sleep 2;
   kill $(cat /tmp/nbd.pid);
   ./qemu-nbd --fork --pid-file=/tmp/nbd.pid --socket=/tmp/nbd.sock -f raw null-co://;
   echo '{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io nbd \"write 0 64k\""}}';
   echo '{"execute": "quit"}') \
  | ./qemu-system-x86_64 \
      -qmp stdio \
      -blockdev '{ "node-name": "nbd", "driver": "nbd", "reconnect-delay": 1, "server": { "type": "unix", "path": "/tmp/nbd.sock" } }' \
      -object iothread,id=iothr0

{"QMP": {"version": {"qemu": {"micro": 50, "minor": 2, "major": 6}, "package": "v6.2.0-1129-g731bf9ede7"}, "capabilities": ["oob"]}}
{"return": {}}
wrote 65536/65536 bytes at offset 0
64 KiB, 1 ops; 00.00 sec (191.279 MiB/sec and 3060.4719 ops/sec)
{"return": ""}
{"return": {}}
wrote 65536/65536 bytes at offset 0
64 KiB, 1 ops; 00.00 sec (159.672 MiB/sec and 2554.7483 ops/sec)
{"return": ""}
{"return": {}}
{"timestamp": {"seconds": 1643731721, "microseconds": 22290}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
qemu-system-x86_64: ../util/qemu-timer.c:115: timerlist_free: Assertion `!timerlist_has_timers(timer_list)' failed.
Aborted (core dumped)

I.e.:

1. Kill/restart the NBD server, as above, so that the reconnect on
   write succeeds immediately
2. Move the NBD node to a different AioContext
3. Wait two seconds, so that the reconnect timer expires
4. Repeat step 1, which will install a new reconnect timer
5. Have qemu quit before that new timer instance can expire

I have tried stripping this down to just a single timer instance, but
didn’t succeed. I always needed one instance to expire in the original
context, and then start another one in the new context.
01.02.2022 19:14, Hanna Reitz wrote:
>>> reconnect_delay_timer should exist only during IO request: it's
>>> created during request if we don't have a connection. And request
>>> will not finish until timer elapsed or connection established (timer
>>> should be removed in this case too). So, again, when attaching /
>>> detaching the context we should be in a drained sections, so no
>>> in-flight requests and no reconnect_delay_timer.
>>
>> Got it. FWIW, other block drivers rely on this, too (e.g. null-aio
>> with latency-ns set creates a timer in every I/O request and settles
>> the request once the timer expires).
>
> Looks like the timer isn’t removed when the connection is
> reestablished.

Oops. That is wrong: if the connection is lost again while the old
timer has still not fired, the assertion in reconnect_delay_timer_init()
will fail.

I think we just need a reconnect_delay_timer_del() call at the end of
the nbd_reconnect_attempt() function (after the
nbd_co_do_establish_connection() call).
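[Editorial sketch] The placement Vladimir suggests might look something
like the following. Only the position of the new call is the point
here; the shape of `nbd_reconnect_attempt()` and the argument list of
`nbd_co_do_establish_connection()` are assumed, not taken from the
actual tree.

```c
static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
{
    /*
     * ... existing logic: arm reconnect_delay_timer if needed, then
     * try to (re-)establish the connection ...
     */
    nbd_co_do_establish_connection(s->bs /* , ... arguments elided ... */);

    /*
     * The attempt is over, successfully or not. Deleting the timer
     * here ensures it cannot still be armed when the connection is
     * lost again, which would otherwise trip the assertion in
     * reconnect_delay_timer_init().
     */
    reconnect_delay_timer_del(s);
}
```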
diff --git a/block/nbd.c b/block/nbd.c
index 63dbfa807d..119a774c04 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -2036,6 +2036,25 @@ static void nbd_cancel_in_flight(BlockDriverState *bs)
     nbd_co_establish_connection_cancel(s->conn);
 }
 
+static void nbd_attach_aio_context(BlockDriverState *bs,
+                                   AioContext *new_context)
+{
+    BDRVNBDState *s = bs->opaque;
+
+    if (s->ioc) {
+        qio_channel_attach_aio_context(s->ioc, new_context);
+    }
+}
+
+static void nbd_detach_aio_context(BlockDriverState *bs)
+{
+    BDRVNBDState *s = bs->opaque;
+
+    if (s->ioc) {
+        qio_channel_detach_aio_context(s->ioc);
+    }
+}
+
 static BlockDriver bdrv_nbd = {
     .format_name = "nbd",
     .protocol_name = "nbd",
@@ -2059,6 +2078,9 @@ static BlockDriver bdrv_nbd = {
     .bdrv_dirname = nbd_dirname,
     .strong_runtime_opts = nbd_strong_runtime_opts,
     .bdrv_cancel_in_flight = nbd_cancel_in_flight,
+
+    .bdrv_attach_aio_context = nbd_attach_aio_context,
+    .bdrv_detach_aio_context = nbd_detach_aio_context,
 };
 
 static BlockDriver bdrv_nbd_tcp = {
@@ -2084,6 +2106,9 @@ static BlockDriver bdrv_nbd_tcp = {
     .bdrv_dirname = nbd_dirname,
     .strong_runtime_opts = nbd_strong_runtime_opts,
     .bdrv_cancel_in_flight = nbd_cancel_in_flight,
+
+    .bdrv_attach_aio_context = nbd_attach_aio_context,
+    .bdrv_detach_aio_context = nbd_detach_aio_context,
 };
 
 static BlockDriver bdrv_nbd_unix = {
@@ -2109,6 +2134,9 @@ static BlockDriver bdrv_nbd_unix = {
     .bdrv_dirname = nbd_dirname,
     .strong_runtime_opts = nbd_strong_runtime_opts,
     .bdrv_cancel_in_flight = nbd_cancel_in_flight,
+
+    .bdrv_attach_aio_context = nbd_attach_aio_context,
+    .bdrv_detach_aio_context = nbd_detach_aio_context,
 };
 
 static void bdrv_nbd_init(void)
s->ioc must always be attached to the NBD node's AioContext. If that
context changes, s->ioc must be attached to the new context.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1990835
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
This is an RFC because I believe there are some other things in the NBD
block driver that need attention on an AioContext change, too. Namely,
there are two timers (reconnect_delay_timer and open_timer) that are
also attached to the node's AioContext, and I'm afraid they need to be
handled, too. Probably pause them on detach, and resume them on attach,
but I'm not sure, which is why I'm posting this as an RFC to get some
comments on that from someone who knows this code better than me. :)

(Also, in a real v1, of course I'd want to add a regression test.)
---
 block/nbd.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)