Message ID | 20221116122241.2856527-2-eesposit@redhat.com (mailing list archive)
---|---
State | New, archived
Series | Still more coroutine and various fixes in block layer
On 16.11.2022 at 13:22, Emanuele Giuseppe Esposito wrote:
> These functions end up calling bdrv_common_block_status_above(), a
> generated_co_wrapper function.
> In addition, they also happen to be always called in coroutine context,
> meaning all callers are coroutine_fn.
> This means that the g_c_w function will enter the qemu_in_coroutine()
> case and eventually suspend (or in other words call qemu_coroutine_yield()).
> Therefore we need to mark such functions coroutine_fn too.
>
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

Ideally, we'd convert them to new wrappers bdrv_co_is_allocated() and
bdrv_co_block_status_above() instead of having to argue that they always
take the coroutine path in g_c_w.

> diff --git a/block/block-copy.c b/block/block-copy.c
> index bb947afdda..f33ab1d0b6 100644
> --- a/block/block-copy.c
> +++ b/block/block-copy.c
> @@ -669,8 +671,9 @@ void block_copy_reset(BlockCopyState *s, int64_t offset, int64_t bytes)
>   * @return 0 when the cluster at @offset was unallocated,
>   * 1 otherwise, and -ret on error.
>   */
> -int64_t block_copy_reset_unallocated(BlockCopyState *s,
> -                                     int64_t offset, int64_t *count)
> +int64_t coroutine_fn block_copy_reset_unallocated(BlockCopyState *s,
> +                                                  int64_t offset,
> +                                                  int64_t *count)
>  {
>      int ret;
>      int64_t clusters, bytes;

This one is a public function. Its prototype in block-copy.h should be
updated, too.

Kevin
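The header change Kevin asks for is small. A minimal sketch of what the updated declaration in include/block/block-copy.h could look like (the surrounding context of the header is assumed, not shown in the thread):

```c
/* include/block/block-copy.h -- sketch of the suggested prototype update;
 * only this declaration changes, gaining the coroutine_fn marker so that
 * it matches the new definition in block/block-copy.c. */
int64_t coroutine_fn block_copy_reset_unallocated(BlockCopyState *s,
                                                  int64_t offset,
                                                  int64_t *count);
```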
On 18/11/2022 at 20:05, Kevin Wolf wrote:
> On 16.11.2022 at 13:22, Emanuele Giuseppe Esposito wrote:
>> These functions end up calling bdrv_common_block_status_above(), a
>> generated_co_wrapper function.
>> In addition, they also happen to be always called in coroutine context,
>> meaning all callers are coroutine_fn.
>> This means that the g_c_w function will enter the qemu_in_coroutine()
>> case and eventually suspend (or in other words call qemu_coroutine_yield()).
>> Therefore we need to mark such functions coroutine_fn too.
>>
>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>
> Ideally, we'd convert them to new wrappers bdrv_co_is_allocated() and
> bdrv_co_block_status_above() instead of having to argue that they always
> take the coroutine path in g_c_w.
>

Ok, so basically I should introduce bdrv_co_is_allocated(), because so far
in this and the next series I never thought about creating it.
Since these functions will eventually be split anyway, I agree, let's
start doing this now.

Thank you,
Emanuele
On 21/11/2022 at 09:32, Emanuele Giuseppe Esposito wrote:
>
>
> On 18/11/2022 at 20:05, Kevin Wolf wrote:
>> On 16.11.2022 at 13:22, Emanuele Giuseppe Esposito wrote:
>>> These functions end up calling bdrv_common_block_status_above(), a
>>> generated_co_wrapper function.
>>> In addition, they also happen to be always called in coroutine context,
>>> meaning all callers are coroutine_fn.
>>> This means that the g_c_w function will enter the qemu_in_coroutine()
>>> case and eventually suspend (or in other words call qemu_coroutine_yield()).
>>> Therefore we need to mark such functions coroutine_fn too.
>>>
>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>>
>> Ideally, we'd convert them to new wrappers bdrv_co_is_allocated() and
>> bdrv_co_block_status_above() instead of having to argue that they always
>> take the coroutine path in g_c_w.
>>
>
> Ok, so basically I should introduce bdrv_co_is_allocated(), because so far
> in this and the next series I never thought about creating it.
> Since these functions will eventually be split anyway, I agree, let's
> start doing this now.

Actually, bdrv_is_allocated would be a g_c_w function in itself, which
calls another g_c_w and is probably called by functions that are or
will be g_c_w.

Is this actually in the scope of this series? I think switching this
specific function and its callers (or similar) will require a lot of
effort, and if I do it here it won't cover all the cases for sure.

Wouldn't it be better to do this kind of thing in a different series
using Paolo's vrc tool?

> Thank you,
> Emanuele
>
On 21.11.2022 at 09:51, Emanuele Giuseppe Esposito wrote:
>
>
> On 21/11/2022 at 09:32, Emanuele Giuseppe Esposito wrote:
> >
> >
> > On 18/11/2022 at 20:05, Kevin Wolf wrote:
> >> On 16.11.2022 at 13:22, Emanuele Giuseppe Esposito wrote:
> >>> These functions end up calling bdrv_common_block_status_above(), a
> >>> generated_co_wrapper function.
> >>> In addition, they also happen to be always called in coroutine context,
> >>> meaning all callers are coroutine_fn.
> >>> This means that the g_c_w function will enter the qemu_in_coroutine()
> >>> case and eventually suspend (or in other words call qemu_coroutine_yield()).
> >>> Therefore we need to mark such functions coroutine_fn too.
> >>>
> >>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> >>
> >> Ideally, we'd convert them to new wrappers bdrv_co_is_allocated() and
> >> bdrv_co_block_status_above() instead of having to argue that they always
> >> take the coroutine path in g_c_w.
> >
> > Ok, so basically I should introduce bdrv_co_is_allocated(), because so far
> > in this and the next series I never thought about creating it.
> > Since these functions will eventually be split anyway, I agree, let's
> > start doing this now.
>
> Actually, bdrv_is_allocated would be a g_c_w function in itself, which
> calls another g_c_w and is probably called by functions that are or
> will be g_c_w.

I'm not sure if I understand. bdrv_is_allocated() is essentially a g_c_w
function today, just indirectly. But we have callers that know that they
are running in a coroutine (which is why you're adding coroutine_fn to
them), so they shouldn't call a g_c_w function, but directly the
coroutine version of the function.

The only reason why you can't currently do that is that
bdrv_is_allocated() exists as a wrapper around the g_c_w function
bdrv_common_block_status_above(), but the same wrapper doesn't exist for
the pure coroutine version bdrv_co_common_block_status_above().

All I'm suggesting is introducing a bdrv_co_is_allocated() that is a
wrapper directly around bdrv_co_common_block_status_above(), so that
the functions you're marking as coroutine_fn can use it instead of
calling g_c_w. This should be about 10 lines of code.

I'm not implying that you should convert any other callers in this
patch, or that you should touch bdrv_is_allocated() at all.

> Is this actually in the scope of this series? I think switching this
> specific function and its callers (or similar) will require a lot of
> effort, and if I do it here it won't cover all the cases for sure.
>
> Wouldn't it be better to do this kind of thing in a different series
> using Paolo's vrc tool?

I'm not sure what the scope of this series is, because you already do
introduce new wrappers in other patches of the series. I assumed it's
just to improve the situation a little, with no claim of being
exhaustive.

Finding and fully converting all callers might indeed be a job for
something like vrc, but here I'm only looking at local consistency in
functions where you're adding coroutine_fn.

Kevin
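For illustration, the roughly ten-line wrapper Kevin describes could mirror the existing bdrv_is_allocated(), only calling the pure coroutine helper directly. This is a sketch, not code from the series; the argument list of bdrv_co_common_block_status_above() is assumed from block/io.c and may not match the tree exactly:

```c
/* Sketch of the suggested bdrv_co_is_allocated(): the same logic as
 * bdrv_is_allocated(), but calling the coroutine version of the
 * block-status helper directly instead of going through g_c_w. */
int coroutine_fn bdrv_co_is_allocated(BlockDriverState *bs, int64_t offset,
                                      int64_t bytes, int64_t *pnum)
{
    int64_t dummy;
    int ret;

    ret = bdrv_co_common_block_status_above(bs, bs, true, false, offset,
                                            bytes, pnum ? pnum : &dummy,
                                            NULL, NULL, NULL);
    if (ret < 0) {
        return ret;
    }

    return !!(ret & BDRV_BLOCK_ALLOCATED);
}
```

With something like this in place, the coroutine_fn callers in block/block-copy.c could call bdrv_co_is_allocated() instead of bdrv_is_allocated(), without touching any non-coroutine caller.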
On 21/11/2022 at 12:50, Kevin Wolf wrote:
> On 21.11.2022 at 09:51, Emanuele Giuseppe Esposito wrote:
>>
>>
>> On 21/11/2022 at 09:32, Emanuele Giuseppe Esposito wrote:
>>>
>>>
>>> On 18/11/2022 at 20:05, Kevin Wolf wrote:
>>>> On 16.11.2022 at 13:22, Emanuele Giuseppe Esposito wrote:
>>>>> These functions end up calling bdrv_common_block_status_above(), a
>>>>> generated_co_wrapper function.
>>>>> In addition, they also happen to be always called in coroutine context,
>>>>> meaning all callers are coroutine_fn.
>>>>> This means that the g_c_w function will enter the qemu_in_coroutine()
>>>>> case and eventually suspend (or in other words call qemu_coroutine_yield()).
>>>>> Therefore we need to mark such functions coroutine_fn too.
>>>>>
>>>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>>>>
>>>> Ideally, we'd convert them to new wrappers bdrv_co_is_allocated() and
>>>> bdrv_co_block_status_above() instead of having to argue that they always
>>>> take the coroutine path in g_c_w.
>>>
>>> Ok, so basically I should introduce bdrv_co_is_allocated(), because so far
>>> in this and the next series I never thought about creating it.
>>> Since these functions will eventually be split anyway, I agree, let's
>>> start doing this now.
>>
>> Actually, bdrv_is_allocated would be a g_c_w function in itself, which
>> calls another g_c_w and is probably called by functions that are or
>> will be g_c_w.
>
> I'm not sure if I understand. bdrv_is_allocated() is essentially a g_c_w
> function today, just indirectly. But we have callers that know that they
> are running in a coroutine (which is why you're adding coroutine_fn to
> them), so they shouldn't call a g_c_w function, but directly the
> coroutine version of the function.
>
> The only reason why you can't currently do that is that
> bdrv_is_allocated() exists as a wrapper around the g_c_w function
> bdrv_common_block_status_above(), but the same wrapper doesn't exist for
> the pure coroutine version bdrv_co_common_block_status_above().
>
> All I'm suggesting is introducing a bdrv_co_is_allocated() that is a
> wrapper directly around bdrv_co_common_block_status_above(), so that
> the functions you're marking as coroutine_fn can use it instead of
> calling g_c_w. This should be about 10 lines of code.
>
> I'm not implying that you should convert any other callers in this
> patch, or that you should touch bdrv_is_allocated() at all.
>
>> Is this actually in the scope of this series? I think switching this
>> specific function and its callers (or similar) will require a lot of
>> effort, and if I do it here it won't cover all the cases for sure.
>>
>> Wouldn't it be better to do this kind of thing in a different series
>> using Paolo's vrc tool?
>
> I'm not sure what the scope of this series is, because you already do
> introduce new wrappers in other patches of the series. I assumed it's
> just to improve the situation a little, with no claim of being
> exhaustive.
>
> Finding and fully converting all callers might indeed be a job for
> something like vrc, but here I'm only looking at local consistency in
> functions where you're adding coroutine_fn.
>

Oh ok, now I see what you mean. I was thinking (and did so in "[PATCH
04/15] block: convert bdrv_refresh_total_sectors in generated_co_wrapper")
of instead converting all callers into g_c_w, and that ended up being a
big pain.

I'll also correct the patch I mentioned above.

Thank you,
Emanuele
diff --git a/block/block-copy.c b/block/block-copy.c
index bb947afdda..f33ab1d0b6 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -577,8 +577,9 @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
     return ret;
 }
 
-static int block_copy_block_status(BlockCopyState *s, int64_t offset,
-                                   int64_t bytes, int64_t *pnum)
+static coroutine_fn int block_copy_block_status(BlockCopyState *s,
+                                                int64_t offset,
+                                                int64_t bytes, int64_t *pnum)
 {
     int64_t num;
     BlockDriverState *base;
@@ -613,8 +614,9 @@ static int block_copy_block_status(BlockCopyState *s, int64_t offset,
  * Check if the cluster starting at offset is allocated or not.
  * return via pnum the number of contiguous clusters sharing this allocation.
  */
-static int block_copy_is_cluster_allocated(BlockCopyState *s, int64_t offset,
-                                           int64_t *pnum)
+static int coroutine_fn block_copy_is_cluster_allocated(BlockCopyState *s,
+                                                        int64_t offset,
+                                                        int64_t *pnum)
 {
     BlockDriverState *bs = s->source->bs;
     int64_t count, total_count = 0;
@@ -669,8 +671,9 @@ void block_copy_reset(BlockCopyState *s, int64_t offset, int64_t bytes)
  * @return 0 when the cluster at @offset was unallocated,
  * 1 otherwise, and -ret on error.
  */
-int64_t block_copy_reset_unallocated(BlockCopyState *s,
-                                     int64_t offset, int64_t *count)
+int64_t coroutine_fn block_copy_reset_unallocated(BlockCopyState *s,
+                                                  int64_t offset,
+                                                  int64_t *count)
 {
     int ret;
     int64_t clusters, bytes;
These functions end up calling bdrv_common_block_status_above(), a
generated_co_wrapper function.
In addition, they also happen to be always called in coroutine context,
meaning all callers are coroutine_fn.
This means that the g_c_w function will enter the qemu_in_coroutine()
case and eventually suspend (or in other words call qemu_coroutine_yield()).
Therefore we need to mark such functions coroutine_fn too.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 block/block-copy.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)
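For readers unfamiliar with generated_co_wrapper: the generated function dispatches on qemu_in_coroutine(), roughly as in the sketch below. This is a simplified, hypothetical illustration; the wrapped operation, the state struct, and the entry-point names are invented here, and it is not the exact output of scripts/block-coroutine-wrapper.py.

```c
/* Simplified, hypothetical sketch of the shape a generated_co_wrapper
 * expands to.  bdrv_some_operation()/bdrv_co_some_operation() and the
 * helper names below are placeholders for illustration only. */
typedef struct {
    BlockDriverState *bs;
    int64_t offset;
    int64_t bytes;
    int64_t *pnum;
    int ret;
    bool done;
} WrappedCall;

static void coroutine_fn wrapped_call_entry(void *opaque)
{
    WrappedCall *s = opaque;

    s->ret = bdrv_co_some_operation(s->bs, s->offset, s->bytes, s->pnum);
    s->done = true;
    aio_wait_kick();
}

int bdrv_some_operation(BlockDriverState *bs, int64_t offset, int64_t bytes,
                        int64_t *pnum)
{
    if (qemu_in_coroutine()) {
        /* Already in coroutine context: call the _co_ version directly.
         * It may call qemu_coroutine_yield(), which is why every caller
         * that reaches this branch must itself be coroutine_fn. */
        return bdrv_co_some_operation(bs, offset, bytes, pnum);
    } else {
        /* Outside coroutine context: create a coroutine and poll until
         * it has completed. */
        WrappedCall s = {
            .bs = bs, .offset = offset, .bytes = bytes, .pnum = pnum,
        };
        Coroutine *co = qemu_coroutine_create(wrapped_call_entry, &s);

        bdrv_coroutine_enter(bs, co);
        BDRV_POLL_WHILE(bs, !s.done);
        return s.ret;
    }
}
```

The commit message's argument concerns the first branch: callers that are already in coroutine context reach the coroutine version directly, which may yield, so those callers must themselves be marked coroutine_fn.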