Message ID | 20250123142301.409846-2-axboe@kernel.dk (mailing list archive)
---|---
State | New
Headers | show
Series | Cleanup alloc cache init_once handling
On 1/23/25 14:21, Jens Axboe wrote:
> A few spots in uring_cmd assume that the SQEs copied are always at the
> start of the structure, and hence mix req->async_data and the struct
> itself.
>
> Clean that up and use the proper indices.
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  io_uring/uring_cmd.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> index 3993c9339ac7..6a63ec4b5445 100644
> --- a/io_uring/uring_cmd.c
> +++ b/io_uring/uring_cmd.c
> @@ -192,8 +192,8 @@ static int io_uring_cmd_prep_setup(struct io_kiocb *req,
>  		return 0;
>  	}
>
> -	memcpy(req->async_data, sqe, uring_sqe_size(req->ctx));
> -	ioucmd->sqe = req->async_data;
> +	memcpy(cache->sqes, sqe, uring_sqe_size(req->ctx));
> +	ioucmd->sqe = cache->sqes;
>  	return 0;
>  }
>
> @@ -260,7 +260,7 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
>  		struct io_uring_cmd_data *cache = req->async_data;
>
>  		if (ioucmd->sqe != (void *) cache)
> -			memcpy(cache, ioucmd->sqe, uring_sqe_size(req->ctx));
> +			memcpy(cache->sqes, ioucmd->sqe, uring_sqe_size(req->ctx));

3347fa658a1b ("io_uring/cmd: add per-op data to struct io_uring_cmd_data")

IIUC the patch above is queued for 6.14, and with that this patch
looks like a fix? At least it feels pretty dangerous without.
On 1/23/25 7:38 AM, Pavel Begunkov wrote:
> On 1/23/25 14:21, Jens Axboe wrote:
>> A few spots in uring_cmd assume that the SQEs copied are always at the
>> start of the structure, and hence mix req->async_data and the struct
>> itself.
>>
>> Clean that up and use the proper indices.
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>> ---
>>  io_uring/uring_cmd.c | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
>> index 3993c9339ac7..6a63ec4b5445 100644
>> --- a/io_uring/uring_cmd.c
>> +++ b/io_uring/uring_cmd.c
>> @@ -192,8 +192,8 @@ static int io_uring_cmd_prep_setup(struct io_kiocb *req,
>>  		return 0;
>>  	}
>> -	memcpy(req->async_data, sqe, uring_sqe_size(req->ctx));
>> -	ioucmd->sqe = req->async_data;
>> +	memcpy(cache->sqes, sqe, uring_sqe_size(req->ctx));
>> +	ioucmd->sqe = cache->sqes;
>>  	return 0;
>>  }
>> @@ -260,7 +260,7 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
>>  		struct io_uring_cmd_data *cache = req->async_data;
>>  		if (ioucmd->sqe != (void *) cache)
>> -			memcpy(cache, ioucmd->sqe, uring_sqe_size(req->ctx));
>> +			memcpy(cache->sqes, ioucmd->sqe, uring_sqe_size(req->ctx));
>
> 3347fa658a1b ("io_uring/cmd: add per-op data to struct io_uring_cmd_data")
>
> IIUC the patch above is queued for 6.14, and with that this patch
> looks like a fix? At least it feels pretty dangerous without.

It's not a fix, the sqes are first in the struct even with that patch.
So I'd consider it a cleanup. In any case, targeting 6.14 for these
alloc cache cleanups as it got introduced there as well.
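[Editor's note: a minimal sketch of the layout being discussed, for readers following along. It assumes only what the thread states, that `sqes` is the first member of `struct io_uring_cmd_data` and that 3347fa658a1b adds a per-op pointer after it; the field name `op_data`, the array size, and the userspace harness are illustrative, not the verbatim in-kernel definition. It shows why the struct pointer and `cache->sqes` happened to be interchangeable before this patch.]

```c
/* Layout sketch only -- modeled on what this thread describes, not a
 * verbatim copy of the upstream struct.  Compile-testable in userspace
 * against the uapi header. */
#include <linux/io_uring.h>	/* struct io_uring_sqe */
#include <stdio.h>

struct io_uring_cmd_data {
	struct io_uring_sqe sqes[2];	/* copied SQE storage, first member */
	void *op_data;			/* per-op data added in 3347fa658a1b (name assumed) */
};

int main(void)
{
	struct io_uring_cmd_data cache;

	/* Because sqes is the first member, a pointer to the struct and a
	 * pointer to its sqes array are the same address -- the layout
	 * accident the old memcpy(req->async_data, ...) relied on. */
	printf("cache=%p cache.sqes=%p\n", (void *)&cache, (void *)cache.sqes);
	return 0;
}
```

Both pointers print the same address, which is why the pre-cleanup code worked even though it named the wrong object.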
On 1/23/25 14:54, Jens Axboe wrote:
> On 1/23/25 7:38 AM, Pavel Begunkov wrote:
>> On 1/23/25 14:21, Jens Axboe wrote:
>>> A few spots in uring_cmd assume that the SQEs copied are always at the
>>> start of the structure, and hence mix req->async_data and the struct
>>> itself.
>>> [...]
>>
>> 3347fa658a1b ("io_uring/cmd: add per-op data to struct io_uring_cmd_data")
>>
>> IIUC the patch above is queued for 6.14, and with that this patch
>> looks like a fix? At least it feels pretty dangerous without.
>
> It's not a fix, the sqes are first in the struct even with that patch.

Ah yes

> So I'd consider it a cleanup. In any case, targeting 6.14 for these
> alloc cache cleanups as it got introduced there as well.

That's good, makes it not that brittle
On 1/23/25 7:57 AM, Pavel Begunkov wrote:
> On 1/23/25 14:54, Jens Axboe wrote:
>> On 1/23/25 7:38 AM, Pavel Begunkov wrote:
>>> On 1/23/25 14:21, Jens Axboe wrote:
>>>> A few spots in uring_cmd assume that the SQEs copied are always at the
>>>> start of the structure, and hence mix req->async_data and the struct
>>>> itself.
>>>> [...]
>>>
>>> 3347fa658a1b ("io_uring/cmd: add per-op data to struct io_uring_cmd_data")
>>>
>>> IIUC the patch above is queued for 6.14, and with that this patch
>>> looks like a fix? At least it feels pretty dangerous without.
>>
>> It's not a fix, the sqes are first in the struct even with that patch.
>
> Ah yes
>
>> So I'd consider it a cleanup. In any case, targeting 6.14 for these
>> alloc cache cleanups as it got introduced there as well.
>
> That's good, makes it not that brittle

Yep it was too easy to miss, don't like them being aliased like that
even if the usage was currently fine.
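[Editor's note: to make the "brittle" point concrete, here is a hedged sketch of what could go wrong if a field were ever placed ahead of `sqes`. The `hypothetical_cmd_data` layout and the `copy_sqe()` helper are purely illustrative and not proposed or existing kernel code.]

```c
/* Hypothetical layout -- NOT the real io_uring struct.  It only shows why
 * aliasing the cache pointer with its SQE storage is fragile. */
#include <linux/io_uring.h>	/* struct io_uring_sqe */
#include <string.h>

struct hypothetical_cmd_data {
	unsigned long flags;		/* imagine a field added before sqes */
	struct io_uring_sqe sqes[2];
	void *op_data;
};

static inline void copy_sqe(struct hypothetical_cmd_data *cache,
			    const struct io_uring_sqe *sqe, size_t sqe_size)
{
	/* Fragile: would trash ->flags now that sqes is no longer first. */
	/* memcpy(cache, sqe, sqe_size); */

	/* Robust: always lands in the SQE storage, whatever the layout. */
	memcpy(cache->sqes, sqe, sqe_size);
}
```

That is essentially what the patch below does: the two call sites now name `cache->sqes` explicitly instead of relying on it sitting at offset zero.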
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 3993c9339ac7..6a63ec4b5445 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -192,8 +192,8 @@ static int io_uring_cmd_prep_setup(struct io_kiocb *req,
 		return 0;
 	}
 
-	memcpy(req->async_data, sqe, uring_sqe_size(req->ctx));
-	ioucmd->sqe = req->async_data;
+	memcpy(cache->sqes, sqe, uring_sqe_size(req->ctx));
+	ioucmd->sqe = cache->sqes;
 	return 0;
 }
 
@@ -260,7 +260,7 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
 		struct io_uring_cmd_data *cache = req->async_data;
 
 		if (ioucmd->sqe != (void *) cache)
-			memcpy(cache, ioucmd->sqe, uring_sqe_size(req->ctx));
+			memcpy(cache->sqes, ioucmd->sqe, uring_sqe_size(req->ctx));
 		return -EAGAIN;
 	} else if (ret == -EIOCBQUEUED) {
 		return -EIOCBQUEUED;
A few spots in uring_cmd assume that the SQEs copied are always at the
start of the structure, and hence mix req->async_data and the struct
itself.

Clean that up and use the proper indices.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/uring_cmd.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)