Message ID: 20241025122247.3709133-1-ming.lei@redhat.com
Series: io_uring: support sqe group and leased group kbuf
On 10/25/24 13:22, Ming Lei wrote:
> The first 3 patches are cleanups that prepare for adding sqe groups.
>
> The 4th patch supports a generic sqe group, which is like a link chain
> but allows each sqe in the group to be issued in parallel. The group
> shares the same IO_LINK & IO_DRAIN boundary, so N:M dependencies can be
> supported with sqe groups and io links together.
>
> The 5th & 6th patches support leasing another subsystem's kbuf to
> io_uring for group-wide use.
>
> The 7th patch supports ublk zero copy based on io_uring sqe groups and
> leased kbufs.
>
> Tests:
>
> 1) pass liburing tests
>    - make runtests
>
> 2) wrote/passed sqe group test cases and an sqe provide-buffer case:
>
>    https://github.com/ming1/liburing/tree/uring_group
>
>    - covers related sqe flag combinations and linked groups, both nop
>      and a multi-destination file copy
>
>    - covers failure-handling tests: fail the leader IO or a member IO
>      in both a single group and linked groups, done in each sqe flag
>      combination test
>
>    - covers io_uring with a leased group kbuf by adding ublk-loop-zc

To make my position clear, I think the table approach will turn out much
better API-wise if the performance suffices, and we can only know that
experimentally. I tried that idea with sockets back then, and it was
looking good. It'd be great if someone tried to implement and compare
it, though I don't believe I should be the one trying it, so maybe Ming
or Jens can, especially since Jens already posted a couple of series for
problems standing in the way, i.e. global rsrc nodes and late buffer
binding. In any case, I'm not opposed to the series if Jens decides to
merge it.
On 10/29/24 11:01 AM, Pavel Begunkov wrote:
> To make my position clear, I think the table approach will turn
> much better API-wise if the performance suffices, and we can only know
> that experimentally. I tried that idea with sockets back then, and it
> was looking well. It'd be great if someone tries to implement and
> compare it, though I don't believe I should be trying it, so maybe Ming
> or Jens can, especially since Jens already posted a couple series for
> problems standing in the way, i.e global rsrc nodes and late buffer
> binding. In any case, I'm not opposing to the series if Jens decides to
> merge it.

With the rsrc node stuff sorted out, I was thinking last night that I
should take another look at this. While that work was (mostly) done
because of the lingering closes, it does nicely enable ephemeral buffers
too.

I'll take a stab at it... While I would love to make progress on the
feature proposed in this series, it's arguably more important to do it
in such a way that we can live with it long term.
On 10/29/24 11:04 AM, Jens Axboe wrote:
> With the rsrc node stuff sorted out, I was thinking last night that I
> should take another look at this. While that work was (mostly) done
> because of the lingering closes, it does nicely enable ephemeral
> buffers too.
>
> I'll take a stab at it... While I would love to make progress on this
> feature proposed in this series, it's arguably more important to do it
> in such a way that we can live with it, long term.

Ming, here's another stab at this, see attached patch. It adds a
LOCAL_BUF opcode, which maps a user provided buffer to an io_rsrc_node
that opcodes can then use. The buffer is visible ONLY within a given
submission - in other words, only within a single io_uring_submit()
call.

The buffer is provided at prep time, which means you don't need to
serialize with the LOCAL_BUF op itself. You can do:

sqe = io_uring_get_sqe(ring);
io_uring_prep_local_buf(sqe, buffer, length, tag);

sqe = io_uring_get_sqe(ring);
io_uring_prep_whatever_op_fixed(sqe, buffer, length, foo);

and have 'whatever' rely on the buffer either being there to use, or the
import failing with -EFAULT. No IOSQE_IO_LINK or similar is needed.
Obviously if you do:

sqe = io_uring_get_sqe(ring);
io_uring_prep_local_buf(sqe, buffer, length, tag);

sqe = io_uring_get_sqe(ring);
io_uring_prep_read_thing_fixed(sqe, buffer, length, foo);
sqe->flags |= IOSQE_IO_LINK;

sqe = io_uring_get_sqe(ring);
io_uring_prep_write_thing_fixed(sqe, buffer, length, foo);

then the filling of the buffer and whoever uses the filled buffer will
need to be serialized, to ensure the buffer content is valid for the
write.

Any opcode using the ephemeral/local buffer will need to grab a
reference to it, just like what is done for normal registered buffers.
If assigned to req->rsrc_node, then it'll be put as part of normal
completion. Hence no special handling is needed for this. The reference
that submit holds is assigned by LOCAL_BUF, and will be put when
submission ends. Hence there is no requirement that opcodes finish
before submit ends, they have their own ref.

All of that should make sense, I think. I'm attaching the most basic of
test apps I wrote to test this, as well as using:

diff --git a/io_uring/rw.c b/io_uring/rw.c
index 30448f343c7f..89662f305342 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -338,7 +338,10 @@ static int io_prep_rw_fixed(struct io_kiocb *req, const struct io_uring_sqe *sqe
 	if (unlikely(ret))
 		return ret;
 
-	node = io_rsrc_node_lookup(&ctx->buf_table, req->buf_index);
+	if (ctx->submit_state.rsrc_node != rsrc_empty_node)
+		node = ctx->submit_state.rsrc_node;
+	else
+		node = io_rsrc_node_lookup(&ctx->buf_table, req->buf_index);
 	if (!node)
 		return -EFAULT;
 	io_req_assign_rsrc_node(req, node);

just so I could test it with normal read/write fixed and do a zero copy
read/write operation where the write file ends up with the data from the
read file.
When you run the test app, you should see:

axboe@m2max-kvm ~/g/liburing (reg-wait)> test/local-buf.t
buffer 0xaaaada34a000
res=0, ud=0x1
res=4096, ud=0x2
res=4096, ud=0x3
res=0, ud=0xaaaada34a000

which shows LOCAL_BUF completing first, then a 4k read, then a 4k write,
and finally the notification for the buffer being done. The test app
sets the tag to be the buffer address, but it could obviously be
anything you want.

Now, this implementation requires a user buffer, and as far as I'm told,
you currently have kernel buffers on the ublk side. There's absolutely
no reason why kernel buffers cannot work, we'd most likely just need to
add an IORING_RSRC_KBUFFER type to handle that. My question here is how
hard is this requirement? Reason I ask is that it's much simpler to work
with userspace buffers. Yes, the current implementation maps them every
time; we could certainly change that, however I don't see this being an
issue. It's really no different than O_DIRECT, and you only need to map
them once for a read + whatever number of writes you'd need to do. If a
'tag' is provided for LOCAL_BUF, it'll post a CQE whenever that buffer
is unmapped. This is a notification for the application that it's done
using the buffer. For a pure kernel buffer, we'd either need to be able
to reference it (so that we KNOW it's not going away) and/or have a
callback associated with the buffer.

Would it be possible for ublk to require the user side to register a
range of memory that should be used for the write buffers, such that
they could be mapped in the kernel instead? Maybe this memory is already
registered as such? I don't know all the details of the ublk zero copy,
but I would imagine there's some flexibility here in terms of how it
gets set up.

ublk would then need to add opcodes that utilize LOCAL_BUF for this,
obviously. As it stands, with the patch, nobody can access these
buffers; we'd need a READ_LOCAL_FIXED etc. to have opcodes be able to
access them.
But that should be fine, you need specific opcodes for zero copy anyway.
You can probably even reuse existing opcodes, and just add something
like IORING_URING_CMD_LOCAL as a flag, rather than the
IORING_URING_CMD_FIXED that is used now for registered buffers.

Let me know what you think. Like I mentioned, this is just a rough
patch. It does work, though, and it is safe, but obviously only handles
userspace memory right now. It sits on top of my io_uring-rsrc branch,
which rewrites the rsrc handling.
On 10/29/24 1:18 PM, Jens Axboe wrote:
> Now, this implementation requires a user buffer, and as far as I'm told,
> you currently have kernel buffers on the ublk side. There's absolutely
> no reason why kernel buffers cannot work, we'd most likely just need to
> add a IORING_RSRC_KBUFFER type to handle that. [...] For a pure kernel
> buffer, we'd either need to be able to reference it (so that we KNOW
> it's not going away) and/or have a callback associated with the buffer.
Just to expand on this - if a kernel buffer is absolutely required, for
example if you're inheriting pages from the page cache or other
locations you cannot control, we would need to add something ala the
below:

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 9621ba533b35..b0258eb37681 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -474,6 +474,10 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
 		if (node->buf)
 			io_buffer_unmap(node->ctx, node);
 		break;
+	case IORING_RSRC_KBUFFER:
+		if (node->buf)
+			node->kbuf_fn(node->buf);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		break;
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index be9b490c400e..8d00460d47ff 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -11,6 +11,7 @@
 enum {
 	IORING_RSRC_FILE	= 0,
 	IORING_RSRC_BUFFER	= 1,
+	IORING_RSRC_KBUFFER	= 2,
 };
 
 struct io_rsrc_node {
@@ -19,6 +20,7 @@ struct io_rsrc_node {
 	u16 type;
 	u64 tag;
 
+	void (*kbuf_fn)(struct io_mapped_ubuf *);
 	union {
 		unsigned long file_ptr;
 		struct io_mapped_ubuf *buf;

and provide a helper that allocates an io_rsrc_node, sets it to type
IORING_RSRC_KBUFFER, and assigns a ->kbuf_fn() that gets a callback when
the final put of the node happens. Whatever ublk command wants to do
zero copy would call this helper at prep time and set the
io_submit_state buffer to be used. Likewise, probably provide an io_rsrc
helper that can be called by kbuf_fn as well to do final cleanup, so
that the callback itself is only tasked with whatever it needs to do
once it's received the data.

For this to work, we'll absolutely need the provider to guarantee that
the pages mapped will remain persistent until that callback is received.
Or have a way to reference the data inside rsrc.c. I'm imagining this is
just stacking the IO, so e.g. you get a read with some data already in
there that you don't control, and you don't complete this read until
some other IO is done. That other IO is what is using the buffer
provided here.
Anyway, just a suggestion if the user provided memory is a no-go; there
are certainly ways we can make this trivially work with memory you
cannot control that is received from inside the kernel, without a lot of
additions.
On 10/29/24 2:06 PM, Jens Axboe wrote:
> On 10/29/24 1:18 PM, Jens Axboe wrote:
>> [...] For a pure kernel buffer, we'd either need to be able to
>> reference it (so that we KNOW it's not going away) and/or have a
>> callback associated with the buffer.
>
> Just to expand on this - if a kernel buffer is absolutely required, for
> example if you're inheriting pages from the page cache or other
> locations you cannot control, we would need to add something ala the
> below:

Here's a more complete one, but utterly untested. It does the same
thing, mapping a struct request, but it maps it to an io_rsrc_node which
in turn has an io_mapped_ubuf in it. Both BUFFER and KBUFFER use the
same type, only the destruction is different. Then the callback provided
needs to do something ala:

struct io_mapped_ubuf *imu = node->buf;

if (imu && refcount_dec_and_test(&imu->refs))
	kvfree(imu);

when it's done with the imu. Probably an rsrc helper should just be done
for that, but those are details.
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 9621ba533b35..050868a4c9f1 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -8,6 +8,8 @@
 #include <linux/nospec.h>
 #include <linux/hugetlb.h>
 #include <linux/compat.h>
+#include <linux/bvec.h>
+#include <linux/blk-mq.h>
 #include <linux/io_uring.h>
 
 #include <uapi/linux/io_uring.h>
@@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
 		if (node->buf)
 			io_buffer_unmap(node->ctx, node);
 		break;
+	case IORING_RSRC_KBUFFER:
+		node->kbuf_fn(node);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		break;
@@ -1070,6 +1075,65 @@ int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg)
 	return ret;
 }
 
+struct io_rsrc_node *io_rsrc_map_request(struct io_ring_ctx *ctx,
+					 struct request *req,
+					 void (*kbuf_fn)(struct io_rsrc_node *))
+{
+	struct io_mapped_ubuf *imu = NULL;
+	struct io_rsrc_node *node = NULL;
+	struct req_iterator rq_iter;
+	unsigned int offset;
+	struct bio_vec bv;
+	int nr_bvecs;
+
+	if (!bio_has_data(req->bio))
+		goto out;
+
+	nr_bvecs = 0;
+	rq_for_each_bvec(bv, req, rq_iter)
+		nr_bvecs++;
+	if (!nr_bvecs)
+		goto out;
+
+	node = io_rsrc_node_alloc(ctx, IORING_RSRC_KBUFFER);
+	if (!node)
+		goto out;
+	node->buf = NULL;
+
+	imu = kvmalloc(struct_size(imu, bvec, nr_bvecs), GFP_NOIO);
+	if (!imu)
+		goto out;
+
+	imu->ubuf = 0;
+	imu->len = 0;
+	if (req->bio != req->biotail) {
+		int idx = 0;
+
+		offset = 0;
+		rq_for_each_bvec(bv, req, rq_iter) {
+			imu->bvec[idx++] = bv;
+			imu->len += bv.bv_len;
+		}
+	} else {
+		struct bio *bio = req->bio;
+
+		offset = bio->bi_iter.bi_bvec_done;
+		imu->bvec[0] = *__bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
+		imu->len = imu->bvec[0].bv_len;
+	}
+	imu->nr_bvecs = nr_bvecs;
+	imu->folio_shift = PAGE_SHIFT;
+	refcount_set(&imu->refs, 1);
+	node->buf = imu;
+	node->kbuf_fn = kbuf_fn;
+	return node;
+out:
+	if (node)
+		io_put_rsrc_node(node);
+	kfree(imu);
+	return NULL;
+}
+
 int io_local_buf_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_ring_ctx *ctx = req->ctx;
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index be9b490c400e..8d479f765fe0 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -11,6 +11,7 @@
 enum {
 	IORING_RSRC_FILE	= 0,
 	IORING_RSRC_BUFFER	= 1,
+	IORING_RSRC_KBUFFER	= 2,
 };
 
 struct io_rsrc_node {
@@ -19,6 +20,7 @@ struct io_rsrc_node {
 	u16 type;
 	u64 tag;
 
+	void (*kbuf_fn)(struct io_rsrc_node *);
 	union {
 		unsigned long file_ptr;
 		struct io_mapped_ubuf *buf;
@@ -52,6 +54,10 @@ int io_import_fixed(int ddir, struct iov_iter *iter,
 		    struct io_mapped_ubuf *imu,
 		    u64 buf_addr, size_t len);
 
+struct io_rsrc_node *io_rsrc_map_request(struct io_ring_ctx *ctx,
+					 struct request *req,
+					 void (*kbuf_fn)(struct io_rsrc_node *));
+
 int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg);
 int io_sqe_buffers_unregister(struct io_ring_ctx *ctx);
 int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
> Here's a more complete one, but utterly untested. But it does the same
> thing, mapping a struct request, but it maps it to an io_rsrc_node
> which in turn has an io_mapped_ubuf in it. Both BUFFER and KBUFFER use
> the same type, only the destruction is different.
>
[...]
> @@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
> 		if (node->buf)
> 			io_buffer_unmap(node->ctx, node);
> 		break;
> +	case IORING_RSRC_KBUFFER:
> +		node->kbuf_fn(node);
> +		break;

Here 'node' is freed later, and it may not work because ->imu is bound
to the node.

[...]
> +	imu->nr_bvecs = nr_bvecs;
> +	imu->folio_shift = PAGE_SHIFT;
> +	refcount_set(&imu->refs, 1);

One big problem is how to initialize the reference count, because this
buffer needs to be used in more than one of the following requests.
Without a perfect counter, the buffer won't be freed at exactly the
right time without an extra OP.

I think the reference should be in `node`, which needs to stay live as
long as any consumer OP hasn't completed.

> +	node->buf = imu;
> +	node->kbuf_fn = kbuf_fn;
> +	return node;

Also this function needs to register the buffer into the table with one
pre-defined buf index, so that the following requests can use it by way
of io_prep_rw_fixed().

If the OP dependency can be avoided, I think this approach is fine,
otherwise I still suggest sqe group. Not only for performance; the
application also becomes too complicated.

We also need to provide a ->prep() callback for uring_cmd drivers, so
that io_rsrc_map_request() can be called by the driver in ->prep();
meanwhile `io_ring_ctx` and `io_rsrc_node` need to be visible to the
driver. What do you think of these kinds of changes?

thanks,
Ming
On 10/29/24 8:03 PM, Ming Lei wrote:
> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
>> @@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
>> 		if (node->buf)
>> 			io_buffer_unmap(node->ctx, node);
>> 		break;
>> +	case IORING_RSRC_KBUFFER:
>> +		node->kbuf_fn(node);
>> +		break;
>
> Here 'node' is freed later, and it may not work because ->imu is bound
> with node.

Not sure why this matters? imu can be bound to any node (and has a
separate ref), but the node will remain for as long as the submission
runs. It has to, because the last reference is put when submission of
all requests in that series ends.

> One big problem is how to initialize the reference count, because this
> buffer need to be used in the following more than one request. Without
> one perfect counter, the buffer won't be freed in the exact time
> without extra OP.

Each request that uses the node will grab a reference to the node. The
node holds a reference to the buffer. So at least as the above works,
the buf will be put when submission ends, as that puts the node and
subsequently the one reference the imu has by default. It'll outlast any
of the requests that use it during submission, and there cannot be any
other users of it as it isn't discoverable outside of that.

> I think the reference should be in `node` which need to be live if any
> consumer OP isn't completed.

That is how it works... io_req_assign_rsrc_node() will assign a node to
a request, which will be there until the request completes.

> Also this function needs to register the buffer to table with one
> pre-defined buf index, then the following request can use it by
> the way of io_prep_rw_fixed().

It should not register it with the table, the whole point is to keep
this node only per-submission discoverable. If you're grabbing random
request pages, then it very much is a bit finicky and needs to be of
limited scope. Each request type would need to support it. For normal
read/write, I'd suggest just adding IORING_OP_READ_LOCAL and WRITE_LOCAL
to do that.

> If OP dependency can be avoided, I think this approach is fine,
> otherwise I still suggest sqe group. Not only performance, but
> application becomes too complicated.

You could avoid the OP dependency with just a flag, if you really wanted
to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
simpler than the sqe group scheme, which I'm a bit worried about as it's
a bit complicated in how deep it needs to go in the code. This one
stands alone, so I'd strongly encourage we pursue this a bit further and
iron out the kinks. Maybe it won't work in the end, I don't know, but it
seems pretty promising and it's soooo much simpler.

> We also we need to provide ->prep() callback for uring_cmd driver, so
> that io_rsrc_map_request() can be called by driver in ->prep(),
> meantime `io_ring_ctx` and `io_rsrc_node` need to be visible for
> driver. What do you think of these kind of changes?

io_ring_ctx is already visible in the normal system headers, and
io_rsrc_node we certainly could make visible. That's not a big deal. It
makes a lot more sense to export than some of the other stuff we have in
there! As long as it's all nicely handled by helpers, we'd be fine.
On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote: > On 10/29/24 8:03 PM, Ming Lei wrote: > > On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote: > >> On 10/29/24 2:06 PM, Jens Axboe wrote: > >>> On 10/29/24 1:18 PM, Jens Axboe wrote: > >>>> Now, this implementation requires a user buffer, and as far as I'm told, > >>>> you currently have kernel buffers on the ublk side. There's absolutely > >>>> no reason why kernel buffers cannot work, we'd most likely just need to > >>>> add a IORING_RSRC_KBUFFER type to handle that. My question here is how > >>>> hard is this requirement? Reason I ask is that it's much simpler to work > >>>> with userspace buffers. Yes the current implementation maps them > >>>> everytime, we could certainly change that, however I don't see this > >>>> being an issue. It's really no different than O_DIRECT, and you only > >>>> need to map them once for a read + whatever number of writes you'd need > >>>> to do. If a 'tag' is provided for LOCAL_BUF, it'll post a CQE whenever > >>>> that buffer is unmapped. This is a notification for the application that > >>>> it's done using the buffer. For a pure kernel buffer, we'd either need > >>>> to be able to reference it (so that we KNOW it's not going away) and/or > >>>> have a callback associated with the buffer. > >>> > >>> Just to expand on this - if a kernel buffer is absolutely required, for > >>> example if you're inheriting pages from the page cache or other > >>> locations you cannot control, we would need to add something ala the > >>> below: > >> > >> Here's a more complete one, but utterly untested. But it does the same > >> thing, mapping a struct request, but it maps it to an io_rsrc_node which > >> in turn has an io_mapped_ubuf in it. Both BUFFER and KBUFFER use the > >> same type, only the destruction is different. 
Then the callback provided > >> needs to do something ala: > >> > >> struct io_mapped_ubuf *imu = node->buf; > >> > >> if (imu && refcount_dec_and_test(&imu->refs)) > >> kvfree(imu); > >> > >> when it's done with the imu. Probably an rsrc helper should just be done > >> for that, but those are details. > >> > >> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c > >> index 9621ba533b35..050868a4c9f1 100644 > >> --- a/io_uring/rsrc.c > >> +++ b/io_uring/rsrc.c > >> @@ -8,6 +8,8 @@ > >> #include <linux/nospec.h> > >> #include <linux/hugetlb.h> > >> #include <linux/compat.h> > >> +#include <linux/bvec.h> > >> +#include <linux/blk-mq.h> > >> #include <linux/io_uring.h> > >> > >> #include <uapi/linux/io_uring.h> > >> @@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node) > >> if (node->buf) > >> io_buffer_unmap(node->ctx, node); > >> break; > >> + case IORING_RSRC_KBUFFER: > >> + node->kbuf_fn(node); > >> + break; > > > > Here 'node' is freed later, and it may not work because ->imu is bound > > with node. > > Not sure why this matters? imu can be bound to any node (and has a > separate ref), but the node will remain for as long as the submission > runs. It has to, because the last reference is put when submission of > all requests in that series ends. Fine, how is the imu found from OP? Not see related code to add the allocated node into submission_state or ctx->buf_table. io_rsrc_node_lookup() needs to find the buffer any way, right? 
>
> >> @@ -1070,6 +1075,65 @@ int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg)
> >>  	return ret;
> >>  }
> >>
> >> +struct io_rsrc_node *io_rsrc_map_request(struct io_ring_ctx *ctx,
> >> +					 struct request *req,
> >> +					 void (*kbuf_fn)(struct io_rsrc_node *))
> >> +{
> >> +	struct io_mapped_ubuf *imu = NULL;
> >> +	struct io_rsrc_node *node = NULL;
> >> +	struct req_iterator rq_iter;
> >> +	unsigned int offset;
> >> +	struct bio_vec bv;
> >> +	int nr_bvecs;
> >> +
> >> +	if (!bio_has_data(req->bio))
> >> +		goto out;
> >> +
> >> +	nr_bvecs = 0;
> >> +	rq_for_each_bvec(bv, req, rq_iter)
> >> +		nr_bvecs++;
> >> +	if (!nr_bvecs)
> >> +		goto out;
> >> +
> >> +	node = io_rsrc_node_alloc(ctx, IORING_RSRC_KBUFFER);
> >> +	if (!node)
> >> +		goto out;
> >> +	node->buf = NULL;
> >> +
> >> +	imu = kvmalloc(struct_size(imu, bvec, nr_bvecs), GFP_NOIO);
> >> +	if (!imu)
> >> +		goto out;
> >> +
> >> +	imu->ubuf = 0;
> >> +	imu->len = 0;
> >> +	if (req->bio != req->biotail) {
> >> +		int idx = 0;
> >> +
> >> +		offset = 0;
> >> +		rq_for_each_bvec(bv, req, rq_iter) {
> >> +			imu->bvec[idx++] = bv;
> >> +			imu->len += bv.bv_len;
> >> +		}
> >> +	} else {
> >> +		struct bio *bio = req->bio;
> >> +
> >> +		offset = bio->bi_iter.bi_bvec_done;
> >> +		imu->bvec[0] = *__bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
> >> +		imu->len = imu->bvec[0].bv_len;
> >> +	}
> >> +	imu->nr_bvecs = nr_bvecs;
> >> +	imu->folio_shift = PAGE_SHIFT;
> >> +	refcount_set(&imu->refs, 1);
> >
> > One big problem is how to initialize the reference count, because this
> > buffer need to be used in the following more than one request. Without
> > one perfect counter, the buffer won't be freed in the exact time without
> > extra OP.
>
> Each request that uses the node, will grab a reference to the node. The
> node holds a reference to the buffer. So at least as the above works,
> the buf will be put when submission ends, as that puts the node and
> subsequently the one reference the imu has by default.
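[Editorial note: the segment-collection step in the io_rsrc_map_request() sketch quoted above can be modeled in plain userspace C. The types below (struct seg, struct kimu, map_segments) are simplified stand-ins for struct bio_vec / io_mapped_ubuf, not kernel definitions; the loop mirrors the rq_for_each_bvec() walk that copies each segment into one flexible-array allocation while totalling the byte length.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* simplified stand-in for struct bio_vec */
struct seg {
	size_t len;
};

/* simplified stand-in for struct io_mapped_ubuf */
struct kimu {
	size_t len;		/* total bytes, like imu->len */
	int nr_bvecs;
	struct seg bvec[];	/* flexible array, like imu->bvec[] */
};

/*
 * Collect a request's segments into a single allocation, analogous to
 * the rq_for_each_bvec() loop in the quoted sketch.
 */
static struct kimu *map_segments(const struct seg *segs, int nr)
{
	struct kimu *imu = malloc(sizeof(*imu) + nr * sizeof(*segs));

	if (!imu)
		return NULL;
	imu->len = 0;
	imu->nr_bvecs = nr;
	for (int i = 0; i < nr; i++) {
		imu->bvec[i] = segs[i];
		imu->len += segs[i].len;
	}
	return imu;
}
```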
It'll outlast any
> of the requests that use it during submission, and there cannot be any
> other users of it as it isn't discoverable outside of that.

OK, if the node/buffer is only looked up in ->prep(), this way works.

> > I think the reference should be in `node` which need to be live if any
> > consumer OP isn't completed.
>
> That is how it works... io_req_assign_rsrc_node() will assign a node to
> a request, which will be there until the request completes.
>
> >> +	node->buf = imu;
> >> +	node->kbuf_fn = kbuf_fn;
> >> +	return node;
> >
> > Also this function needs to register the buffer to table with one
> > pre-defined buf index, then the following request can use it by
> > the way of io_prep_rw_fixed().
>
> It should not register it with the table, the whole point is to keep
> this node only per-submission discoverable. If you're grabbing random
> request pages, then it very much is a bit finicky and needs to be of
> limited scope.

There can be more than one buffer used in a single submission; can you
share how an OP finds the specific buffer with ->buf_index from the
submission state? This part is missing in your patch.

> Each request type would need to support it. For normal read/write, I'd
> suggest just adding IORING_OP_READ_LOCAL and WRITE_LOCAL to do that.
>
> > If OP dependency can be avoided, I think this approach is fine,
> > otherwise I still suggest sqe group. Not only performance, but
> > application becomes too complicated.
>
> You could avoid the OP dependency with just a flag, if you really wanted
> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot

Yes, IO_LINK won't work for submitting multiple IOs concurrently; an
extra syscall makes the application too complicated, and IO latency is
increased.

> simpler than the sqe group scheme, which I'm a bit worried about as it's
> a bit complicated in how deep it needs to go in the code. This one
> stands alone, so I'd strongly encourage we pursue this a bit further and
> iron out the kinks.
> Maybe it won't work in the end, I don't know, but it
> seems pretty promising and it's soooo much simpler.

If buffer registration and lookup are always done in ->prep(), the OP
dependency may be avoided.

> > We also we need to provide ->prep() callback for uring_cmd driver, so
> > that io_rsrc_map_request() can be called by driver in ->prep(),
> > meantime `io_ring_ctx` and `io_rsrc_node` need to be visible for driver.
> > What do you think of these kind of changes?
>
> io_ring_ctx is already visible in the normal system headers,
> io_rsrc_node we certainly could make visible. That's not a big deal. It
> makes a lot more sense to export than some of the other stuff we have in
> there! As long as it's all nicely handled by helpers, then we'd be fine.

OK.

thanks,
Ming
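[Editorial note: the refcount lifecycle debated above - each request grabs a reference on the node, the node holds the imu's initial reference, and the driver-provided kbuf_fn frees the buffer on the final put - can be modeled in plain userspace C. The structs and helpers below are simplified stand-ins, not the kernel definitions.]

```c
#include <assert.h>
#include <stdbool.h>

/* stand-in for io_mapped_ubuf: refs is a plain int, not refcount_t */
struct imu {
	int refs;
	bool freed;
};

/* stand-in for io_rsrc_node */
struct rsrc_node {
	int refs;
	struct imu *buf;
	void (*kbuf_fn)(struct rsrc_node *);
};

/* what a driver-provided kbuf_fn would do on the final node put */
static void kbuf_release(struct rsrc_node *node)
{
	struct imu *imu = node->buf;

	if (imu && --imu->refs == 0)
		imu->freed = true;	/* kvfree(imu) in the kernel */
}

/* each request that uses the node grabs a reference */
static void node_get(struct rsrc_node *node)
{
	node->refs++;
}

/* request completion / end-of-submission put */
static void node_put(struct rsrc_node *node)
{
	if (--node->refs == 0)
		node->kbuf_fn(node);	/* the io_free_rsrc_node() path */
}
```

The point of the model: the buffer survives as long as either the submission or any in-flight request still holds a node reference, with no "perfect counter" needed up front.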
On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote:
> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote:

...

> > You could avoid the OP dependency with just a flag, if you really wanted
> > to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
>
> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra
> syscall makes application too complicated, and IO latency is increased.
>
> > simpler than the sqe group scheme, which I'm a bit worried about as it's
> > a bit complicated in how deep it needs to go in the code. This one
> > stands alone, so I'd strongly encourage we pursue this a bit further and
> > iron out the kinks. Maybe it won't work in the end, I don't know, but it
> > seems pretty promising and it's soooo much simpler.
>
> If buffer register and lookup are always done in ->prep(), OP dependency
> may be avoided.

Even if all buffer registration and lookup are done in ->prep(), the OP
dependency still can't be avoided completely, such as:

1) two local buffers for sending to two sockets

2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)]

3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)]

group 1 and group 2 need to be linked, but inside each group, the two
sends may be submitted in parallel.

Thanks,
Ming
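[Editorial note: the dependency shape in the example above can be stated as ordering invariants - each leader (the buffer-providing op) precedes its members, all of group 1 completes before group 2's leader runs, and member order within a group is free. The toy model below checks a schedule against those invariants; the event names are purely illustrative, not io_uring opcodes.]

```c
#include <assert.h>
#include <stdbool.h>

/* KBUFn provides a group's buffer; Gn_SENDm are that group's members */
enum ev { KBUF1, G1_SEND1, G1_SEND2, KBUF2, G2_SEND1, G2_SEND2 };

/* position of event e in a schedule, or -1 if absent */
static int pos(const enum ev *sched, int n, enum ev e)
{
	for (int i = 0; i < n; i++)
		if (sched[i] == e)
			return i;
	return -1;
}

/*
 * A schedule is valid iff each leader precedes its members and all of
 * group 1 precedes group 2's leader; members within a group may run in
 * any order - that is the N:M shape links alone can't express.
 */
static bool schedule_valid(const enum ev *s, int n)
{
	return pos(s, n, KBUF1) < pos(s, n, G1_SEND1) &&
	       pos(s, n, KBUF1) < pos(s, n, G1_SEND2) &&
	       pos(s, n, G1_SEND1) < pos(s, n, KBUF2) &&
	       pos(s, n, G1_SEND2) < pos(s, n, KBUF2) &&
	       pos(s, n, KBUF2) < pos(s, n, G2_SEND1) &&
	       pos(s, n, KBUF2) < pos(s, n, G2_SEND2);
}
```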
On 10/29/24 9:08 PM, Ming Lei wrote: > On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote: >> On 10/29/24 8:03 PM, Ming Lei wrote: >>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote: >>>> On 10/29/24 2:06 PM, Jens Axboe wrote: >>>>> On 10/29/24 1:18 PM, Jens Axboe wrote: >>>>>> Now, this implementation requires a user buffer, and as far as I'm told, >>>>>> you currently have kernel buffers on the ublk side. There's absolutely >>>>>> no reason why kernel buffers cannot work, we'd most likely just need to >>>>>> add a IORING_RSRC_KBUFFER type to handle that. My question here is how >>>>>> hard is this requirement? Reason I ask is that it's much simpler to work >>>>>> with userspace buffers. Yes the current implementation maps them >>>>>> everytime, we could certainly change that, however I don't see this >>>>>> being an issue. It's really no different than O_DIRECT, and you only >>>>>> need to map them once for a read + whatever number of writes you'd need >>>>>> to do. If a 'tag' is provided for LOCAL_BUF, it'll post a CQE whenever >>>>>> that buffer is unmapped. This is a notification for the application that >>>>>> it's done using the buffer. For a pure kernel buffer, we'd either need >>>>>> to be able to reference it (so that we KNOW it's not going away) and/or >>>>>> have a callback associated with the buffer. >>>>> >>>>> Just to expand on this - if a kernel buffer is absolutely required, for >>>>> example if you're inheriting pages from the page cache or other >>>>> locations you cannot control, we would need to add something ala the >>>>> below: >>>> >>>> Here's a more complete one, but utterly untested. But it does the same >>>> thing, mapping a struct request, but it maps it to an io_rsrc_node which >>>> in turn has an io_mapped_ubuf in it. Both BUFFER and KBUFFER use the >>>> same type, only the destruction is different. 
>>>> Then the callback provided
>>>> needs to do something ala:
>>>>
>>>> 	struct io_mapped_ubuf *imu = node->buf;
>>>>
>>>> 	if (imu && refcount_dec_and_test(&imu->refs))
>>>> 		kvfree(imu);
>>>>
>>>> when it's done with the imu. Probably an rsrc helper should just be done
>>>> for that, but those are details.
>>>>
>>>> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
>>>> index 9621ba533b35..050868a4c9f1 100644
>>>> --- a/io_uring/rsrc.c
>>>> +++ b/io_uring/rsrc.c
>>>> @@ -8,6 +8,8 @@
>>>>  #include <linux/nospec.h>
>>>>  #include <linux/hugetlb.h>
>>>>  #include <linux/compat.h>
>>>> +#include <linux/bvec.h>
>>>> +#include <linux/blk-mq.h>
>>>>  #include <linux/io_uring.h>
>>>>
>>>>  #include <uapi/linux/io_uring.h>
>>>> @@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
>>>>  		if (node->buf)
>>>>  			io_buffer_unmap(node->ctx, node);
>>>>  		break;
>>>> +	case IORING_RSRC_KBUFFER:
>>>> +		node->kbuf_fn(node);
>>>> +		break;
>>>
>>> Here 'node' is freed later, and it may not work because ->imu is bound
>>> with node.
>>
>> Not sure why this matters? imu can be bound to any node (and has a
>> separate ref), but the node will remain for as long as the submission
>> runs. It has to, because the last reference is put when submission of
>> all requests in that series ends.
>
> Fine, how is the imu found from OP? Not see related code to add the
> allocated node into submission_state or ctx->buf_table.

Just didn't do that, see the POC test patch I did for rw for just
grabbing the fixed one in io_submit_state. Really depends on how many
we'd need - if it's just 1 per submit, then whatever I had would work
and the OP just needs to know to look there.

> io_rsrc_node_lookup() needs to find the buffer any way, right?

That's for table lookup, for the POC there's just the one node hence
nothing really to lookup. It's either rsrc_empty_node, or a valid node.

>>> I think the reference should be in `node` which need to be live if any
>>> consumer OP isn't completed.
>> >> That is how it works... io_req_assign_rsrc_node() will assign a node to >> a request, which will be there until the request completes. >> >>>> + node->buf = imu; >>>> + node->kbuf_fn = kbuf_fn; >>>> + return node; >>> >>> Also this function needs to register the buffer to table with one >>> pre-defined buf index, then the following request can use it by >>> the way of io_prep_rw_fixed(). >> >> It should not register it with the table, the whole point is to keep >> this node only per-submission discoverable. If you're grabbing random >> request pages, then it very much is a bit finicky and needs to be of >> limited scope. > > There can be more than 1 buffer uses in single submission, can you share > how OP finds the specific buffer with ->buf_index from submission state? > This part is missed in your patch. If we need more than one, then yeah we'd need an index rather than just a single pointer. Doesn't really change the mechanics, you'd need to provide an index like with ->buf_index. It's not missed in the patch, it's really just a POC patch to show how it can be done, by no means a done solution! But we can certainly get it there. >> Each request type would need to support it. For normal read/write, I'd >> suggest just adding IORING_OP_READ_LOCAL and WRITE_LOCAL to do that. >> >>> If OP dependency can be avoided, I think this approach is fine, >>> otherwise I still suggest sqe group. Not only performance, but >>> application becomes too complicated. >> >> You could avoid the OP dependency with just a flag, if you really wanted >> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot > > Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra > syscall makes application too complicated, and IO latency is increased. It's really not a big deal to prepare-and-submit the dependencies separately, but at the same time, I don't think it'd be a bad idea to support eg 2 local buffers per submit. Or whatever we need there. 
This is more from a usability point of view, because the rest of the machinery is so much more expensive than a single extra syscall that the latter is not going to affect IO latencies at all.
On 10/29/24 10:11 PM, Ming Lei wrote: > On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote: >> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote: > > ... > >>> You could avoid the OP dependency with just a flag, if you really wanted >>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot >> >> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra >> syscall makes application too complicated, and IO latency is increased. >> >>> simpler than the sqe group scheme, which I'm a bit worried about as it's >>> a bit complicated in how deep it needs to go in the code. This one >>> stands alone, so I'd strongly encourage we pursue this a bit further and >>> iron out the kinks. Maybe it won't work in the end, I don't know, but it >>> seems pretty promising and it's soooo much simpler. >> >> If buffer register and lookup are always done in ->prep(), OP dependency >> may be avoided. > > Even all buffer register and lookup are done in ->prep(), OP dependency > still can't be avoided completely, such as: > > 1) two local buffers for sending to two sockets > > 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)] > > 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)] > > group 1 and group 2 needs to be linked, but inside each group, the two > sends may be submitted in parallel. That is where groups of course work, in that you can submit 2 groups and have each member inside each group run independently. But I do think we need to decouple the local buffer and group concepts entirely. For the first step, getting local buffers working with zero copy would be ideal, and then just live with the fact that group 1 needs to be submitted first and group 2 once the first ones are done. Once local buffers are done, we can look at doing the sqe grouping in a nice way. 
I do think it's a potentially powerful concept, but we're going to make a lot more progress on this issue if we carefully separate dependencies and get each of them done separately.
On Wed, Oct 30, 2024 at 07:20:48AM -0600, Jens Axboe wrote: > On 10/29/24 10:11 PM, Ming Lei wrote: > > On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote: > >> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote: > > > > ... > > > >>> You could avoid the OP dependency with just a flag, if you really wanted > >>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot > >> > >> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra > >> syscall makes application too complicated, and IO latency is increased. > >> > >>> simpler than the sqe group scheme, which I'm a bit worried about as it's > >>> a bit complicated in how deep it needs to go in the code. This one > >>> stands alone, so I'd strongly encourage we pursue this a bit further and > >>> iron out the kinks. Maybe it won't work in the end, I don't know, but it > >>> seems pretty promising and it's soooo much simpler. > >> > >> If buffer register and lookup are always done in ->prep(), OP dependency > >> may be avoided. > > > > Even all buffer register and lookup are done in ->prep(), OP dependency > > still can't be avoided completely, such as: > > > > 1) two local buffers for sending to two sockets > > > > 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)] > > > > 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)] > > > > group 1 and group 2 needs to be linked, but inside each group, the two > > sends may be submitted in parallel. > > That is where groups of course work, in that you can submit 2 groups and > have each member inside each group run independently. But I do think we > need to decouple the local buffer and group concepts entirely. For the > first step, getting local buffers working with zero copy would be ideal, > and then just live with the fact that group 1 needs to be submitted > first and group 2 once the first ones are done. 
IMHO, it is a _kernel_ zero copy (_performance_) feature, which often implies:

- better performance expectations
- no big change to existing applications to use the feature

Application developers are less interested in a sort of crippled or immature feature, especially one needing big changes to existing code logic (then two code paths need to be maintained), with potential performance regression.

With sqe group and REQ_F_GROUP_KBUF, an application just needs a few lines of code change to use the feature, and it is pretty easy to evaluate the feature since there is no extra logic change & no extra syscall/wait introduced. The whole patchset has been mature enough, but is unfortunately blocked without obvious reasons.

>
> Once local buffers are done, we can look at doing the sqe grouping in a
> nice way. I do think it's a potentially powerful concept, but we're
> going to make a lot more progress on this issue if we carefully separate
> dependencies and get each of them done separately.

One fundamental difference between local buffer and REQ_F_GROUP_KBUF is:

- local buffer has to be provided and used in ->prep()
- REQ_F_GROUP_KBUF needs to be provided in ->issue() instead of ->prep()

The only common code could be one buffer abstraction for the OP to use, but it is still used differently, ->prep() vs. ->issue(). So it is hard to call it decoupling, especially since REQ_F_GROUP_KBUF has been simple enough, and the main change is to import it in OP code.

Local buffer is one smart idea, but I hope the following things may be settled first:

1) is it generic enough to only allow providing a local buffer during ->prep()?

- this way actually becomes sync & nowait IO, instead of AIO, and has been one strong constraint from the UAPI viewpoint.

- a driver may need to wait until some data comes, then return & provide the buffer with data, and a local buffer can't cover this case

2) is it allowed to call ->uring_cmd() from io_uring_cmd_prep()? If not, any idea how to call into the driver for leasing the kernel buffer to io_uring?
3) in OP code, how to differentiate normal userspace buffer select from local buffer? And how does the OP know whether normal buffer select or a local kernel buffer should be used? Some OPs may want to use normal buffer select instead of local buffer, others may want to use local buffer.

4) arbitrary numbers of local buffers need to be supported, since IO often comes in batches; it shouldn't be hard to support by adding an xarray to the submission state, but what do you think of this added complexity? Without supporting an arbitrary number of local buffers, performance can be just bad, and it doesn't make sense from a zero-copy viewpoint. Meantime, as the number of local buffers is increased, more rsrc_node & imu allocations are introduced, which still may degrade perf a bit.

5) io_rsrc_node becomes part of the interface between io_uring and the driver for releasing the leased buffer, so extra data has to be added to `io_rsrc_node` for driver use.

However, from the above, the following can be concluded at least:

- it isn't generic enough, #1, #3
- it still needs sqe group
- it is much more complicated than REQ_F_GROUP_KBUF alone
- it can't be more efficient

Thanks,
Ming
On 10/30/24 02:43, Jens Axboe wrote:
> On 10/29/24 8:03 PM, Ming Lei wrote:
>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
>>> On 10/29/24 2:06 PM, Jens Axboe wrote:
>>>> On 10/29/24 1:18 PM, Jens Axboe wrote:
...
>>> +	node->buf = imu;
>>> +	node->kbuf_fn = kbuf_fn;
>>> +	return node;
>>
>> Also this function needs to register the buffer to table with one
>> pre-defined buf index, then the following request can use it by
>> the way of io_prep_rw_fixed().
>
> It should not register it with the table, the whole point is to keep
> this node only per-submission discoverable. If you're grabbing random
> request pages, then it very much is a bit finicky

Registering it into the table has enough design and flexibility merits: error handling, allowing any type of request dependencies by handling it in user space, etc.

> and needs to be of
> limited scope.

And I don't think we can force it, neither with limiting exposure to submission only nor with Ming's group based approach. The user can always queue a request that will never complete and/or use DEFER_TASKRUN and just not let it run. In this sense it might be dangerous to block requests of an average system shared block device, but if it's fine with ublk it sounds like it should be fine for any of the aforementioned approaches.

> Each request type would need to support it. For normal read/write, I'd
> suggest just adding IORING_OP_READ_LOCAL and WRITE_LOCAL to do that.
On 10/30/24 8:53 PM, Ming Lei wrote: > On Wed, Oct 30, 2024 at 07:20:48AM -0600, Jens Axboe wrote: >> On 10/29/24 10:11 PM, Ming Lei wrote: >>> On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote: >>>> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote: >>> >>> ... >>> >>>>> You could avoid the OP dependency with just a flag, if you really wanted >>>>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot >>>> >>>> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra >>>> syscall makes application too complicated, and IO latency is increased. >>>> >>>>> simpler than the sqe group scheme, which I'm a bit worried about as it's >>>>> a bit complicated in how deep it needs to go in the code. This one >>>>> stands alone, so I'd strongly encourage we pursue this a bit further and >>>>> iron out the kinks. Maybe it won't work in the end, I don't know, but it >>>>> seems pretty promising and it's soooo much simpler. >>>> >>>> If buffer register and lookup are always done in ->prep(), OP dependency >>>> may be avoided. >>> >>> Even all buffer register and lookup are done in ->prep(), OP dependency >>> still can't be avoided completely, such as: >>> >>> 1) two local buffers for sending to two sockets >>> >>> 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)] >>> >>> 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)] >>> >>> group 1 and group 2 needs to be linked, but inside each group, the two >>> sends may be submitted in parallel. >> >> That is where groups of course work, in that you can submit 2 groups and >> have each member inside each group run independently. But I do think we >> need to decouple the local buffer and group concepts entirely. For the >> first step, getting local buffers working with zero copy would be ideal, >> and then just live with the fact that group 1 needs to be submitted >> first and group 2 once the first ones are done. 
> > IMHO, it is one _kernel_ zero copy(_performance_) feature, which often > imply: > > - better performance expectation > - no big change on existed application for using this feature For #2, really depends on what it is. But ideally, yes, agree. > Application developer is less interested in sort of crippled or > immature feature, especially need big change on existed code > logic(then two code paths need to maintain), with potential > performance regression. > > With sqe group and REQ_F_GROUP_KBUF, application just needs lines of > code change for using the feature, and it is pretty easy to evaluate > the feature since no any extra logic change & no extra syscall/wait > introduced. The whole patchset has been mature enough, unfortunately > blocked without obvious reasons. Let me tell you where I'm coming from. If you might recall, I originated the whole grouping idea. Didn't complete it, but it's essentially the same concept as REQ_F_GROUP_KBUF in that you have some dependents on a leader, and the dependents can run in parallel rather than being serialized by links. I'm obviously in favor of this concept, but I want to see it being done in such a way that it's actually something we can reason about and maintain. You want it for zero copy, which makes sense, but I also want to ensure it's a CLEAN implementation that doesn't have tangles in places it doesn't need to. You seem to be very hard to convince of making ANY changes at all. In your mind the whole thing is done, and it's being "blocked without obvious reason". It's not being blocked at all, I've been diligently trying to work with you recently on getting this done. I'm at least as interested as you in getting this work done. But I want you to work with me a bit on some items so we can get it into a shape where I'm happy with it, and I can maintain it going forward. So, please, rather than dig your heels in all the time, have an open mind on how we can accomplish some of these things. 
>> Once local buffers are done, we can look at doing the sqe grouping in a >> nice way. I do think it's a potentially powerful concept, but we're >> going to make a lot more progress on this issue if we carefully separate >> dependencies and get each of them done separately. > > One fundamental difference between local buffer and REQ_F_GROUP_KBUF is > > - local buffer has to be provided and used in ->prep() > - REQ_F_GROUP_KBUF needs to be provided in ->issue() instead of ->prep() It does not - the POC certainly did it in ->prep(), but all it really cares about is having the ring locked. ->prep() always has that, ->issue() _normally_ has that, unless it ends up in an io-wq punt. You can certainly do it in ->issue() and still have it be per-submit, the latter which I care about for safety reasons. This just means it has to be provided in the _first_ issue, and that IOSQE_ASYNC must not be set on the request. I think that restriction is fine, nobody should really be using IOSQE_ASYNC anyway. I think the original POC maybe did more harm than good in that it was too simplistic, and you seem too focused on the limits of that. So let me detail what it actually could look like. We have io_submit_state in io_ring_ctx. This is per-submit private data, it's initialized and flushed for each io_uring_enter(2) that submits requests. We have a registered file and buffer table, file_table and buf_table. These have life times that are dependent on the ring and registration/unregistration. We could have a local_table. This one should be setup by some register command, eg reserving X slots for that. At the end of submit, we'd flush this table, putting nodes in there. Requests can look at the table in either prep or issue, and find buffer nodes. If a request uses one of these, it grabs a ref and hence has it available until it puts it at IO completion time. When a single submit context is done, local_table is iterated (if any entries exist) and existing nodes cleared and put. 
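[Editorial note: the per-submit local_table idea described above can be sketched in userspace C. Everything below is illustrative, not io_uring API: LOCAL_SLOTS, struct lnode, and the helpers are stand-ins. The table takes a reference when a node is installed, requests grab their own references on lookup, and the end-of-submit flush drops only the table's references, so in-flight requests keep the node alive.]

```c
#include <assert.h>
#include <stddef.h>

#define LOCAL_SLOTS 4	/* illustrative table size, reserved at register time */

struct lnode {
	int refs;
};

struct local_table {
	struct lnode *slots[LOCAL_SLOTS];
	int nr;
};

static void lnode_put(struct lnode *n)
{
	n->refs--;	/* a real node would be freed on the last put */
}

/* installing a buffer for this submit: the table takes one reference */
static void local_install(struct local_table *t, int idx, struct lnode *n)
{
	n->refs++;
	t->slots[idx] = n;
	t->nr++;
}

/* a request that finds the node grabs its own ref, dropped at completion */
static struct lnode *local_lookup(struct local_table *t, int idx)
{
	struct lnode *n = t->slots[idx];

	if (n)
		n->refs++;
	return n;
}

/* end-of-submit flush: drop only the table's references, cheap if empty */
static void local_flush(struct local_table *t)
{
	if (!t->nr)
		return;
	for (int i = 0; i < LOCAL_SLOTS; i++) {
		if (t->slots[i]) {
			lnode_put(t->slots[i]);
			t->slots[i] = NULL;
		}
	}
	t->nr = 0;
}
```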
That provides a similar table to buf_table, but with a lifetime of a submit. Just like local buf. Yes it would not be private to a single group, it'd be private to a submit which has potentially bigger scope, but that should not matter.

That should give you exactly what you need, if you use IORING_RSRC_KBUFFER in the local_table. But it could even be used for IORING_RSRC_BUFFER as well, providing buffers for a single submit cycle as well.

Rather than do something completely on the side with io_uring_kernel_buf, we can use io_rsrc_node and io_mapped_ubuf for this. Which goes back to my initial rant in this email - use EXISTING infrastructure for these things. A big part of why this isn't making progress is that a lot of things are done on the side rather than being integrated. Then you need extra io_kiocb members, where it really should just be using io_rsrc_node and get everything else for free. No need to do special checking and putting separately, it's a resource node just like any other resource node we already support.

> The only common code could be one buffer abstraction for OP to use, but
> still used differently, ->prep() vs. ->issue().

With the prep vs issue thing not being an issue, then it sounds like we fully agree that a) it should be one buffer abstraction, and b) we already have the infrastructure for this. We just need to add IORING_RSRC_KBUFFER, which I already posted some POC code for.

> So it is hard to call it decouple, especially REQ_F_GROUP_KBUF has been
> simple enough, and the main change is to import it in OP code.
>
> Local buffer is one smart idea, but I hope the following things may be
> settled first:
>
> 1) is it generic enough to just allow to provide local buffer during
> ->prep()?
>
> - this way actually becomes sync & nowait IO, instead AIO, and has been
> one strong constraint from UAPI viewpoint.
>
> - Driver may need to wait until some data comes, then return & provide
> the buffer with data, and local buffer can't cover this case

This should be moot now with the above explanation.

> 2) is it allowed to call ->uring_cmd() from io_uring_cmd_prep()? If not,
> any idea to call into driver for leasing the kernel buffer to io_uring?

Ditto

> 3) in OP code, how to differentiate normal userspace buffer select with
> local buffer? And how does OP know if normal buffer select or local
> kernel buffer should be used? Some OP may want to use normal buffer
> select instead of local buffer, others may want to use local buffer.

Yes this is a key question we need to figure out. Right now using fixed buffers needs to set ->buf_index, and the OP needs to know about it. Let's not confuse it with buffer select, IOSQE_BUFFER_SELECT, as that's for provided buffers.

> 4) arbitrary numbers of local buffer needs to be supported, since IO
> often comes at batch, it shouldn't be hard to support it by adding xarray
> to submission state, what do you think of this added complexity? Without
> supporting arbitrary number of local buffers, performance can be just
> bad, it doesn't make sense as zc viewpoint. Meantime as number of local
> buffer is increased, more rsrc_node & imu allocation is introduced, this
> still may degrade perf a bit.

That's fine, we just need to reserve space for them upfront. I don't like the xarray idea, as:

1) xarray does internal locking, which we don't need here

2) The existing io_rsrc_data table is what is being used for io_rsrc_node management now. This would introduce another method for that.

I do want to ensure that io_submit_state_finish() is still low overhead, and using an xarray would be more expensive than just doing:

	if (ctx->local_table.nr)
		flush_nodes();

as you'd need to always setup an iterator. But this isn't really THAT important.
The benefit of using an xarray would be that we'd get flexible storing of members without needing pre-registration, obviously. > 5) io_rsrc_node becomes part of interface between io_uring and driver > for releasing the leased buffer, so extra data has to be > added to `io_rsrc_node` for driver use. That's fine imho. The KBUFFER addition already adds the callback, we can add a data thing too. The kernel you based your code on has an io_rsrc_node that is 48 bytes in size, and my current tree has one where it's 32 bytes in size after the table rework. If we have to add 2x8b to support this, that's NOT a big deal and we just end up with a node that's the same size as before. And we get rid of this odd intermediate io_uring_kernel_buf struct, which is WAY bigger anyway, and requires TWO allocations where the existing io_mapped_ubuf embeds the bvec. I'd argue two vs one allocs is a much bigger deal for performance reasons. As a final note, one thing I mentioned in an earlier email is that part of the issue here is that there are several things that need ironing out, and they are actually totally separate. One is the buffer side, which this email mostly deals with, the other one is the grouping concept. For the sqe grouping, one sticking point has been using that last sqe->flags bit. I was thinking about this last night, and what if we got away from using a flag entirely? At some point io_uring needs to deal with this flag limitation, but it's arguably a large effort, and I'd greatly prefer not having to paper over it to shove in grouped SQEs. So... what if we simply added a new OP, IORING_OP_GROUP_START, or something like that. 
Hence instead of having a designated group leader bit for an OP, eg:

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, ...);
	sqe->flags |= IOSQE_GROUP_BIT;

you'd do:

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_group_start(sqe, ...);
	sqe->flags |= IOSQE_IO_LINK;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, ...);

which would be the equivalent transformation - the read would be the group leader as it's the first member of that chain. The read should set IOSQE_IO_LINK for as long as it has members. The members in that group would NOT be serialized. They would use IOSQE_IO_LINK purely to be part of that group, but IOSQE_IO_LINK would not cause them to be serialized. Hence the link just implies membership, not ordering within the group. This removes the flag issue, with the slight caveat that IOSQE_IO_LINK has a different meaning inside the group.

Maybe you'd need a GROUP_END op as well, so you could potentially terminate the group. Or maybe you'd just rely on the usual semantics, which is "first one that doesn't have IOSQE_IO_LINK set marks the end of the group", which is how linked chains work right now too.

The whole grouping wouldn't change at all, it's just a different way of marking what constitutes a group that doesn't run afoul of the whole flag limitation thing. Just an idea!
On 10/31/24 02:53, Ming Lei wrote: > On Wed, Oct 30, 2024 at 07:20:48AM -0600, Jens Axboe wrote: >> On 10/29/24 10:11 PM, Ming Lei wrote: >>> On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote: >>>> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote: >>> >>> ... >>> >>>>> You could avoid the OP dependency with just a flag, if you really wanted >>>>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot >>>> >>>> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra >>>> syscall makes application too complicated, and IO latency is increased. >>>> >>>>> simpler than the sqe group scheme, which I'm a bit worried about as it's >>>>> a bit complicated in how deep it needs to go in the code. This one >>>>> stands alone, so I'd strongly encourage we pursue this a bit further and >>>>> iron out the kinks. Maybe it won't work in the end, I don't know, but it >>>>> seems pretty promising and it's soooo much simpler. >>>> >>>> If buffer register and lookup are always done in ->prep(), OP dependency >>>> may be avoided. >>> >>> Even all buffer register and lookup are done in ->prep(), OP dependency >>> still can't be avoided completely, such as: >>> >>> 1) two local buffers for sending to two sockets >>> >>> 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)] >>> >>> 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)] >>> >>> group 1 and group 2 needs to be linked, but inside each group, the two >>> sends may be submitted in parallel. >> >> That is where groups of course work, in that you can submit 2 groups and >> have each member inside each group run independently. But I do think we >> need to decouple the local buffer and group concepts entirely. For the >> first step, getting local buffers working with zero copy would be ideal, >> and then just live with the fact that group 1 needs to be submitted >> first and group 2 once the first ones are done. 
>
> IMHO, it is one _kernel_ zero copy(_performance_) feature, which often imply:
>
> - better performance expectation
> - no big change on existed application for using this feature

Yes, the feature doesn't make sense without appropriate performance wins, but I outright disagree with "there should be no big uapi changes to use it". It might be nice if the user doesn't have to change anything, but I find it of lower priority than performance, clarity of the overall design and so on.

> Application developer is less interested in sort of crippled or immature
> feature, especially need big change on existed code logic(then two code paths
> need to maintain), with potential performance regression.

Then we just need to avoid creating a "crippled" feature, I believe everyone is on the same page here. As for maturity, features don't get there at the same pace, extra layers of complexity definitely do make getting into shape much slower. You can argue that you like how the uapi turned out, though I believe there are still rough edges if we consider it a generic feature, but the kernel side of things is fairly complicated.

> With sqe group and REQ_F_GROUP_KBUF, application just needs lines of
> code change for using the feature, and it is pretty easy to evaluate
> the feature since no any extra logic change & no extra syscall/wait
> introduced. The whole patchset has been mature enough, unfortunately
> blocked without obvious reasons.
>
>> Once local buffers are done, we can look at doing the sqe grouping in a
>> nice way. I do think it's a potentially powerful concept, but we're
>> going to make a lot more progress on this issue if we carefully separate
>> dependencies and get each of them done separately.
> > One fundamental difference between local buffer and REQ_F_GROUP_KBUF is > > - local buffer has to be provided and used in ->prep() > - REQ_F_GROUP_KBUF needs to be provided in ->issue() instead of ->prep() I'd need to take a look at that local buffer patch to say, but likely there is a way to shift all of it to ->issue(), which would be more aligned with fixed file resolution and how links use it.
On 10/31/24 7:25 AM, Pavel Begunkov wrote: > On 10/30/24 02:43, Jens Axboe wrote: >> On 10/29/24 8:03 PM, Ming Lei wrote: >>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote: >>>> On 10/29/24 2:06 PM, Jens Axboe wrote: >>>>> On 10/29/24 1:18 PM, Jens Axboe wrote: > ... >>>> + node->buf = imu; >>>> + node->kbuf_fn = kbuf_fn; >>>> + return node; >>> >>> Also this function needs to register the buffer to table with one >>> pre-defined buf index, then the following request can use it by >>> the way of io_prep_rw_fixed(). >> >> It should not register it with the table, the whole point is to keep >> this node only per-submission discoverable. If you're grabbing random >> request pages, then it very much is a bit finicky > > Registering it into the table has enough of design and flexibility > merits: error handling, allowing any type of dependencies of requests > by handling it in the user space, etc. Right, but it has to be a special table. See my lengthier reply to Ming. The initial POC did install it into a table, it's just a one-slot table, io_submit_state. I think the right approach is to have an actual struct io_rsrc_data local_table in the ctx, with refs put at the end of submit. Same kind of concept, just allows for more entries (potentially), with the same requirement that nodes get put when submit ends. IOW, requests need to find it within the same submit. Obviously you would not NEED to do that, but if the use case is grabbing bvecs out of a request, then it very much should not be discoverable past the initial assignments within that submit scope. >> and needs to be of >> limited scope. > > And I don't think we can force it, neither with limiting exposure to > submission only nor with the Ming's group based approach. The user can > always queue a request that will never complete and/or by using > DEFER_TASKRUN and just not letting it run. 
In this sense it might be > dangerous to block requests of an average system shared block device, > but if it's fine with ublk it sounds like it should be fine for any of > the aforementioned approaches. As long as the resource remains valid until the last put of the node, then it should be OK. Yes the application can mess things up in terms of latency if it uses one of these bufs for eg a read on a pipe that never gets any data, but the data will remain valid regardless. And that's very much a "doctor it hurts when I..." case, it should not cause any safety issues. It'll just prevent progress for the other requests that are using that buffer, if they need the final put to happen before making progress.
Another option is that we fully stick with the per-group buffer concept, which could also work just fine with io_rsrc_node. If we stick with the OP_GROUP_START thing, then that op could setup group_buf thing that is local to that group. This is where an instantiated buffer would appear too, keeping it strictly local to that group. That avoids needing any kind of ring state for this, and the group_buf would be propagated from the group leader to the members. The group_buf lives until all members of the group are dead, at which point it's released. I forget if your grouping implementation mandated the same scheme I originally had, where the group leader completes last? If it does, then it's a natural thing to have the group_buf live for the duration of the group leader, and it can just be normal per-io_kiocb data at that point, nothing special needed there. As with the previous scheme, each request using one of these IORING_RSRC_KBUFFER nodes just assigns it like it would any other fixed resource node, and the normal completion path puts it.
On 10/31/24 14:29, Jens Axboe wrote: > On 10/31/24 7:25 AM, Pavel Begunkov wrote: >> On 10/30/24 02:43, Jens Axboe wrote: >>> On 10/29/24 8:03 PM, Ming Lei wrote: >>>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote: >>>>> On 10/29/24 2:06 PM, Jens Axboe wrote: >>>>>> On 10/29/24 1:18 PM, Jens Axboe wrote: >> ... >>>>> + node->buf = imu; >>>>> + node->kbuf_fn = kbuf_fn; >>>>> + return node; >>>> >>>> Also this function needs to register the buffer to table with one >>>> pre-defined buf index, then the following request can use it by >>>> the way of io_prep_rw_fixed(). >>> >>> It should not register it with the table, the whole point is to keep >>> this node only per-submission discoverable. If you're grabbing random >>> request pages, then it very much is a bit finicky >> >> Registering it into the table has enough of design and flexibility >> merits: error handling, allowing any type of dependencies of requests >> by handling it in the user space, etc. > > Right, but it has to be a special table. See my lengthier reply to Ming. Mind pointing the specific part? I read through the thread and didn't see why it _has_ to be a special table. And by "special" I assume you mean the property of it being cleaned up / flushed by the end of submission / syscall, right? > The initial POC did install it into a table, it's just a one-slot table, By "table" I actually mean anything that survives beyond the current syscall / submission and potentially can be used by requests submitted with another syscall. > io_submit_state. I think the right approach is to have an actual struct > io_rsrc_data local_table in the ctx, with refs put at the end of submit. > Same kind of concept, just allows for more entries (potentially), with > the same requirement that nodes get put when submit ends. IOW, requests > need to find it within the same submit. 
> > Obviously you would not NEED to do that, but if the use case is grabbing > bvecs out of a request, then it very much should not be discoverable > past the initial assignments within that submit scope. > >>> and needs to be of >>> limited scope. >> >> And I don't think we can force it, neither with limiting exposure to >> submission only nor with the Ming's group based approach. The user can >> always queue a request that will never complete and/or by using >> DEFER_TASKRUN and just not letting it run. In this sense it might be >> dangerous to block requests of an average system shared block device, >> but if it's fine with ublk it sounds like it should be fine for any of >> the aforementioned approaches. > > As long as the resource remains valid until the last put of the node, > then it should be OK. Yes the application can mess things up in terms of It should be fine in terms of buffers staying alive. The "dangerous" part I mentioned is about abuse of a shared resource, e.g. one container locking up all requests of a bdev so that another container can't do any IO, maybe even with an fs on top. Nevertheless, it's ublk, I don't think we need to concern about that much since io_uring is on the other side from normal user space. > latency if it uses one of these bufs for eg a read on a pipe that never > gets any data, but the data will remain valid regardless. And that's > very much a "doctor it hurts when I..." case, it should not cause any Right, I care about malicious abuse when it can affect other users, break isolation / fairness, etc., I'm saying that there is no difference between all the approaches in this aspect, and if so it should also be perfectly ok from the kernel's perspective to allow to leave a buffer in the table long term. If the user wants to screw itself and doesn't remove the buffer that's the user's choice to shoot itself in the leg. From this angle, that I look at the auto removal you add not as some security / etc. 
concern, but just as a QoL / performance feature so that the user doesn't need to remove the buffer by hand. FWIW, instead of having another table, we can just mark a sub range of the main buffer table to be cleared every time after submission, just like we separate auto slot allocation with ranges. > safety issues. It'll just prevent progress for the other requests that > are using that buffer, if they need the final put to happen before > making progress.
On 10/31/24 9:25 AM, Pavel Begunkov wrote: > On 10/31/24 14:29, Jens Axboe wrote: >> On 10/31/24 7:25 AM, Pavel Begunkov wrote: >>> On 10/30/24 02:43, Jens Axboe wrote: >>>> On 10/29/24 8:03 PM, Ming Lei wrote: >>>>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote: >>>>>> On 10/29/24 2:06 PM, Jens Axboe wrote: >>>>>>> On 10/29/24 1:18 PM, Jens Axboe wrote: >>> ... >>>>>> + node->buf = imu; >>>>>> + node->kbuf_fn = kbuf_fn; >>>>>> + return node; >>>>> >>>>> Also this function needs to register the buffer to table with one >>>>> pre-defined buf index, then the following request can use it by >>>>> the way of io_prep_rw_fixed(). >>>> >>>> It should not register it with the table, the whole point is to keep >>>> this node only per-submission discoverable. If you're grabbing random >>>> request pages, then it very much is a bit finicky >>> >>> Registering it into the table has enough of design and flexibility >>> merits: error handling, allowing any type of dependencies of requests >>> by handling it in the user space, etc. >> >> Right, but it has to be a special table. See my lengthier reply to Ming. > > Mind pointing the specific part? I read through the thread and didn't > see why it _has_ to be a special table. > And by "special" I assume you mean the property of it being cleaned up > / flushed by the end of submission / syscall, right? Right, that's all I mean, special in the sense that it isn't persistent. Nothing special about it otherwise. Maybe "separate table from buf_table" is a more accurate way to describe it. >> The initial POC did install it into a table, it's just a one-slot table, > > By "table" I actually mean anything that survives beyond the current > syscall / submission and potentially can be used by requests submitted > with another syscall. Obviously there's nothing special about the above mentioned table in the sense that it's a normal table, it just doesn't survive beyond the current submission. 
The reason why I like that approach is that it doesn't leave potentially iffy data in a table beyond that submission. If we don't own this data, it's merely borrowed from someone else, then special case must be taken around it. >> io_submit_state. I think the right approach is to have an actual struct >> io_rsrc_data local_table in the ctx, with refs put at the end of submit. >> Same kind of concept, just allows for more entries (potentially), with >> the same requirement that nodes get put when submit ends. IOW, requests >> need to find it within the same submit. >> >> Obviously you would not NEED to do that, but if the use case is grabbing >> bvecs out of a request, then it very much should not be discoverable >> past the initial assignments within that submit scope. >> >>>> and needs to be of >>>> limited scope. >>> >>> And I don't think we can force it, neither with limiting exposure to >>> submission only nor with the Ming's group based approach. The user can >>> always queue a request that will never complete and/or by using >>> DEFER_TASKRUN and just not letting it run. In this sense it might be >>> dangerous to block requests of an average system shared block device, >>> but if it's fine with ublk it sounds like it should be fine for any of >>> the aforementioned approaches. >> >> As long as the resource remains valid until the last put of the node, >> then it should be OK. Yes the application can mess things up in terms of > > It should be fine in terms of buffers staying alive. The "dangerous" > part I mentioned is about abuse of a shared resource, e.g. one > container locking up all requests of a bdev so that another container > can't do any IO, maybe even with an fs on top. Nevertheless, it's ublk, > I don't think we need to concern about that much since io_uring is > on the other side from normal user space. If you leave it in the table, then you can no longer rely on the final put being the callback driver. 
Maybe this is fine, but then it needs some other mechanism for this.

>> latency if it uses one of these bufs for eg a read on a pipe that never
>> gets any data, but the data will remain valid regardless. And that's
>> very much a "doctor it hurts when I..." case, it should not cause any
>
> Right, I care about malicious abuse when it can affect other users,
> break isolation / fairness, etc., I'm saying that there is no
> difference between all the approaches in this aspect, and if so
> it should also be perfectly ok from the kernel's perspective to allow
> to leave a buffer in the table long term. If the user wants to screw
> itself and doesn't remove the buffer that's the user's choice to
> shoot itself in the leg.
>
> From this angle, that I look at the auto removal you add not as some
> security / etc. concern, but just as a QoL / performance feature so
> that the user doesn't need to remove the buffer by hand.
>
> FWIW, instead of having another table, we can just mark a sub range
> of the main buffer table to be cleared every time after submission,
> just like we separate auto slot allocation with ranges.

I did consider that idea too, mainly from the perspective of then not needing any kind of special OP or OP support to grab one of these buffers, it'd just use the normal table but in a separate range. Doesn't feel super clean, and does require some odd setup. Realistically, applications probably use one or the other and not combined, so perhaps it's fine and the range is just the normal range. If they do mix the two, then yeah they would want to use separate ranges for them.

Honestly don't care too deeply about that implementation detail, I care more about having these buffers be io_rsrc_node and using the general infrastructure for them.
If we have to add IORING_RSRC_KBUFFER for them and a callback + data field to io_rsrc_node, that's still a much better approach than having some other intermediate type which basically does the same thing, except it needs new fields to store it and new helpers to alloc/put it.
On 10/31/24 15:42, Jens Axboe wrote: > On 10/31/24 9:25 AM, Pavel Begunkov wrote: >> On 10/31/24 14:29, Jens Axboe wrote: >>> On 10/31/24 7:25 AM, Pavel Begunkov wrote: >>>> On 10/30/24 02:43, Jens Axboe wrote: >>>>> On 10/29/24 8:03 PM, Ming Lei wrote: >>>>>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote: >>>>>>> On 10/29/24 2:06 PM, Jens Axboe wrote: >>>>>>>> On 10/29/24 1:18 PM, Jens Axboe wrote: >>>> ... >>>>>>> + node->buf = imu; >>>>>>> + node->kbuf_fn = kbuf_fn; >>>>>>> + return node; >>>>>> >>>>>> Also this function needs to register the buffer to table with one >>>>>> pre-defined buf index, then the following request can use it by >>>>>> the way of io_prep_rw_fixed(). >>>>> >>>>> It should not register it with the table, the whole point is to keep >>>>> this node only per-submission discoverable. If you're grabbing random >>>>> request pages, then it very much is a bit finicky >>>> >>>> Registering it into the table has enough of design and flexibility >>>> merits: error handling, allowing any type of dependencies of requests >>>> by handling it in the user space, etc. >>> >>> Right, but it has to be a special table. See my lengthier reply to Ming. >> >> Mind pointing the specific part? I read through the thread and didn't >> see why it _has_ to be a special table. >> And by "special" I assume you mean the property of it being cleaned up >> / flushed by the end of submission / syscall, right? > > Right, that's all I mean, special in the sense that it isn't persistent. > Nothing special about it otherwise. Maybe "separate table from > buf_table" is a more accurate way to describe it. > >>> The initial POC did install it into a table, it's just a one-slot table, >> >> By "table" I actually mean anything that survives beyond the current >> syscall / submission and potentially can be used by requests submitted >> with another syscall. 
> > Obviously there's nothing special about the above mentioned table in the > sense that it's a normal table, it just doesn't survive beyond the > current submission. The reason why I like that approach is that it > doesn't leave potentially iffy data in a table beyond that submission. > If we don't own this data, it's merely borrowed from someone else, then > special case must be taken around it. So you're not trying to prevent some potential malicious use but rather making it a bit nicer for buggy users. I don't think I care much about that aspect and would sacrifice the property if it gives us anything good anywhere else. >>> io_submit_state. I think the right approach is to have an actual struct >>> io_rsrc_data local_table in the ctx, with refs put at the end of submit. >>> Same kind of concept, just allows for more entries (potentially), with >>> the same requirement that nodes get put when submit ends. IOW, requests >>> need to find it within the same submit. >>> >>> Obviously you would not NEED to do that, but if the use case is grabbing >>> bvecs out of a request, then it very much should not be discoverable >>> past the initial assignments within that submit scope. >>> >>>>> and needs to be of >>>>> limited scope. >>>> >>>> And I don't think we can force it, neither with limiting exposure to >>>> submission only nor with the Ming's group based approach. The user can >>>> always queue a request that will never complete and/or by using >>>> DEFER_TASKRUN and just not letting it run. In this sense it might be >>>> dangerous to block requests of an average system shared block device, >>>> but if it's fine with ublk it sounds like it should be fine for any of >>>> the aforementioned approaches. >>> >>> As long as the resource remains valid until the last put of the node, >>> then it should be OK. Yes the application can mess things up in terms of >> >> It should be fine in terms of buffers staying alive. 
The "dangerous" >> part I mentioned is about abuse of a shared resource, e.g. one >> container locking up all requests of a bdev so that another container >> can't do any IO, maybe even with an fs on top. Nevertheless, it's ublk, >> I don't think we need to concern about that much since io_uring is >> on the other side from normal user space. > > If you leave it in the table, then you can no longer rely on the final > put being the callback driver. Maybe this is fine, but then it needs > some other mechanism for this. Not sure I follow. The ->kbuf_fn is set by the driver, right? It'll always be called once the node is destroyed, in this sense the final destination is always the driver that leased the buffer. Or do you mean the final rsrc_node put? Not sure how that works considering requests can complete inside the submission as well as outlive it with the node reference. >>> latency if it uses one of these bufs for eg a read on a pipe that never >>> gets any data, but the data will remain valid regardless. And that's >>> very much a "doctor it hurts when I..." case, it should not cause any >> >> Right, I care about malicious abuse when it can affect other users, >> break isolation / fairness, etc., I'm saying that there is no >> difference between all the approaches in this aspect, and if so >> it should also be perfectly ok from the kernel's perspective to allow >> to leave a buffer in the table long term. If the user wants to screw >> itself and doesn't remove the buffer that's the user's choice to >> shoot itself in the leg. >> >> From this angle, that I look at the auto removal you add not as some >> security / etc. concern, but just as a QoL / performance feature so >> that the user doesn't need to remove the buffer by hand. >> >> FWIW, instead of having another table, we can just mark a sub range >> of the main buffer table to be cleared every time after submission, >> just like we separate auto slot allocation with ranges. 
>
> I did consider that idea too, mainly from the perspective of then not
> needing any kind of special OP or OP support to grab one of these
> buffers, it'd just use the normal table but in a separate range. Doesn't
> feel super clean, and does require some odd setup. Realistically,

I feel like we don't even need to differentiate it from normal reg buffers in how it's used by other opcodes, cleaning the table is just a feature, I'd even argue an optional one.

> applications probably use one or the other and not combined, so
> perhaps it's fine and the range is just the normal range. If they do mix
> the two, then yeah they would want to use separate ranges for them.
>
> Honestly don't care too deeply about that implementation detail, I care
> more about having these buffers be io_rsrc_node and using the general
> infrastructure for them. If we have to add IORING_RSRC_KBUFFER for them
> and a callback + data field to io_rsrc_node, that's still a much better

Right, and I don't think it's a problem at all, for most of the users destroying a resource is cold path anyway, apart from this zc proposal nobody registers file/buffer just for one request.

> approach than having some other intermediate type which basically does
> the same thing, except it needs new fields to store it and new helpers
> to alloc/put it.
On Thu, Oct 31, 2024 at 07:35:35AM -0600, Jens Axboe wrote: > On 10/30/24 8:53 PM, Ming Lei wrote: > > On Wed, Oct 30, 2024 at 07:20:48AM -0600, Jens Axboe wrote: > >> On 10/29/24 10:11 PM, Ming Lei wrote: > >>> On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote: > >>>> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote: > >>> > >>> ... > >>> > >>>>> You could avoid the OP dependency with just a flag, if you really wanted > >>>>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot > >>>> > >>>> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra > >>>> syscall makes application too complicated, and IO latency is increased. > >>>> > >>>>> simpler than the sqe group scheme, which I'm a bit worried about as it's > >>>>> a bit complicated in how deep it needs to go in the code. This one > >>>>> stands alone, so I'd strongly encourage we pursue this a bit further and > >>>>> iron out the kinks. Maybe it won't work in the end, I don't know, but it > >>>>> seems pretty promising and it's soooo much simpler. > >>>> > >>>> If buffer register and lookup are always done in ->prep(), OP dependency > >>>> may be avoided. > >>> > >>> Even all buffer register and lookup are done in ->prep(), OP dependency > >>> still can't be avoided completely, such as: > >>> > >>> 1) two local buffers for sending to two sockets > >>> > >>> 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)] > >>> > >>> 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)] > >>> > >>> group 1 and group 2 needs to be linked, but inside each group, the two > >>> sends may be submitted in parallel. > >> > >> That is where groups of course work, in that you can submit 2 groups and > >> have each member inside each group run independently. But I do think we > >> need to decouple the local buffer and group concepts entirely. 
For the > >> first step, getting local buffers working with zero copy would be ideal, > >> and then just live with the fact that group 1 needs to be submitted > >> first and group 2 once the first ones are done. > > > > IMHO, it is one _kernel_ zero copy(_performance_) feature, which often > > imply: > > > > - better performance expectation > > - no big change on existed application for using this feature > > For #2, really depends on what it is. But ideally, yes, agree. > > > Application developer is less interested in sort of crippled or > > immature feature, especially need big change on existed code > > logic(then two code paths need to maintain), with potential > > performance regression. > > > > With sqe group and REQ_F_GROUP_KBUF, application just needs lines of > > code change for using the feature, and it is pretty easy to evaluate > > the feature since no any extra logic change & no extra syscall/wait > > introduced. The whole patchset has been mature enough, unfortunately > > blocked without obvious reasons. > > Let me tell you where I'm coming from. If you might recall, I originated > the whole grouping idea. Didn't complete it, but it's essentially the > same concept as REQ_F_GROUP_KBUF in that you have some dependents on a > leader, and the dependents can run in parallel rather than being > serialized by links. I'm obviously in favor of this concept, but I want > to see it being done in such a way that it's actually something we can > reason about and maintain. You want it for zero copy, which makes sense, > but I also want to ensure it's a CLEAN implementation that doesn't have > tangles in places it doesn't need to. > > You seem to be very hard to convince of making ANY changes at all. In > your mind the whole thing is done, and it's being "blocked without > obvious reason". It's not being blocked at all, I've been diligently > trying to work with you recently on getting this done. I'm at least as > interested as you in getting this work done. 
> But I want you to work with
> me a bit on some items so we can get it into a shape where I'm happy
> with it, and I can maintain it going forward.
>
> So, please, rather than dig your heels in all the time, have an open
> mind on how we can accomplish some of these things.
>
> >> Once local buffers are done, we can look at doing the sqe grouping in a
> >> nice way. I do think it's a potentially powerful concept, but we're
> >> going to make a lot more progress on this issue if we carefully separate
> >> dependencies and get each of them done separately.
> >
> > One fundamental difference between local buffer and REQ_F_GROUP_KBUF is
> >
> > - local buffer has to be provided and used in ->prep()
> > - REQ_F_GROUP_KBUF needs to be provided in ->issue() instead of ->prep()
>
> It does not - the POC certainly did it in ->prep(), but all it really
> cares about is having the ring locked. ->prep() always has that,
> ->issue() _normally_ has that, unless it ends up in an io-wq punt.
>
> You can certainly do it in ->issue() and still have it be per-submit,
> the latter which I care about for safety reasons. This just means it has
> to be provided in the _first_ issue, and that IOSQE_ASYNC must not be
> set on the request. I think that restriction is fine, nobody should
> really be using IOSQE_ASYNC anyway.

Yes, IOSQE_ASYNC can't work, and IO_LINK can't work either. The restriction on IO_LINK is too strong, because it needs big application changes.

> I think the original POC maybe did more harm than good in that it was
> too simplistic, and you seem too focused on the limits of that. So let

I am actually trying to think through how to code local buffer; that is why I put these questions first.

> me detail what it actually could look like. We have io_submit_state in
> io_ring_ctx. This is per-submit private data, it's initialized and
> flushed for each io_uring_enter(2) that submits requests.
>
> We have a registered file and buffer table, file_table and buf_table.
> These have life times that are dependent on the ring and > registration/unregistration. We could have a local_table. This one > should be setup by some register command, eg reserving X slots for that. > At the end of submit, we'd flush this table, putting nodes in there. > Requests can look at the table in either prep or issue, and find buffer > nodes. If a request uses one of these, it grabs a ref and hence has it > available until it puts it at IO completion time. When a single submit > context is done, local_table is iterated (if any entries exist) and > existing nodes cleared and put. > > That provides a similar table to buf_table, but with a lifetime of a > submit. Just like local buf. Yes it would not be private to a single > group, it'd be private to a submit which has potentially bigger scope, > but that should not matter. > > That should give you exactly what you need, if you use > IORING_RSRC_KBUFFER in the local_table. But it could even be used for > IORING_RSRC_BUFFER as well, providing buffers for a single submit cycle > as well. > > Rather than do something completely on the side with > io_uring_kernel_buf, we can use io_rsrc_node and io_mapped_ubuf for > this. Which goes back to my initial rant in this email - use EXISTING > infrastructure for these things. A big part of why this isn't making > progress is that a lot of things are done on the side rather than being > integrated. Then you need extra io_kiocb members, where it really should > just be using io_rsrc_node and get everything else for free. No need to > do special checking and putting seperately, it's a resource node just > like any other resource node we already support. > > > The only common code could be one buffer abstraction for OP to use, but > > still used differently, ->prep() vs. ->issue(). > > With the prep vs issue thing not being an issue, then it sounds like we IO_LINK is another blocker for prep vs issue thing. 
> fully agree that a) it should be one buffer abstraction, and b) we

Yes, I can't agree more.

> already have the infrastructure for this. We just need to add
> IORING_RSRC_KBUFFER, which I already posted some POC code for.

I am open to IORING_RSRC_KBUFFER if there isn't too strong a limit for
applications. Disallowing IOSQE_ASYNC is fine, but not allowing IO_LINK
does cause trouble for applications, and needs big changes in app code.

> 
> > So it is hard to call it decoupled, especially since REQ_F_GROUP_KBUF
> > has been simple enough, and the main change is to import it in OP code.
> > 
> > Local buffer is one smart idea, but I hope the following things may be
> > settled first:
> > 
> > 1) is it generic enough to only allow providing a local buffer during
> > ->prep()?
> > 
> > - this way actually becomes sync & nowait IO, instead of AIO, and has
> > been one strong constraint from the UAPI viewpoint.
> > 
> > - the driver may need to wait until some data comes, then return &
> > provide the buffer with data, and local buffer can't cover this case
> 
> This should be moot now with the above explanation.
> 
> > 2) is it allowed to call ->uring_cmd() from io_uring_cmd_prep()? If not,
> > any idea how to call into the driver for leasing the kernel buffer to
> > io_uring?
> 
> Ditto
> 
> > 3) in OP code, how to differentiate normal userspace buffer select from
> > local buffer? And how does the OP know if normal buffer select or local
> > kernel buffer should be used? Some OPs may want to use normal buffer
> > select instead of local buffer, others may want to use local buffer.
> 
> Yes this is a key question we need to figure out. Right now using fixed
> buffers needs to set ->buf_index, and the OP needs to know about it.
> Let's not confuse it with buffer select, IOSQE_BUFFER_SELECT, as that's
> for provided buffers.

Indeed, here what matters is fixed buffers.
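For context, this is how fixed buffers are selected today with liburing: the registered-buffer index is carried in the sqe, and the opcode must be a `*_fixed` variant that knows to look at it. This is only a fragment, assuming an initialized ring, an open `fd`, and a buffer table already registered via `io_uring_register_buffers()`:

```c
/* Fragment: read into registered buffer 'buf_index'. 'buf' must point
 * into an iovec previously registered on this ring. */
struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

io_uring_prep_read_fixed(sqe, fd, buf, len, 0 /* file offset */,
			 buf_index);
```

The question above is essentially whether a leased kernel buffer can reuse this same `->buf_index` addressing, or whether an OP needs a separate way to say "take the group/local buffer instead".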
> 
> > 4) arbitrary numbers of local buffers need to be supported, since IO
> > often comes in batches; it shouldn't be hard to support by adding an
> > xarray to the submission state, what do you think of this added
> > complexity? Without supporting an arbitrary number of local buffers,
> > performance can be just bad, and it doesn't make sense from the zc
> > viewpoint. Meantime, as the number of local buffers is increased, more
> > rsrc_node & imu allocation is introduced, and this still may degrade
> > perf a bit.
> 
> That's fine, we just need to reserve space for them upfront. I don't
> like the xarray idea, as:
> 
> 1) xarray does internal locking, which we don't need here
> 2) The existing io_rsrc_data table is what is being used for
>    io_rsrc_node management now. This would introduce another method for
>    that.
> 
> I do want to ensure that io_submit_state_finish() is still low overhead,
> and using an xarray would be more expensive than just doing:
> 
> 	if (ctx->local_table.nr)
> 		flush_nodes();
> 
> as you'd need to always set up an iterator. But this isn't really THAT
> important. The benefit of using an xarray would be that we'd get
> flexible storing of members without needing pre-registration, obviously.

The main trouble could be buffer table pre-allocation if an xarray isn't
used.

> 
> > 5) io_rsrc_node becomes part of the interface between io_uring and the
> > driver for releasing the leased buffer, so extra data has to be
> > added to `io_rsrc_node` for driver use.
> 
> That's fine imho. The KBUFFER addition already adds the callback, we can
> add a data thing too. The kernel you based your code on has an
> io_rsrc_node that is 48 bytes in size, and my current tree has one where
> it's 32 bytes in size after the table rework. If we have to add 2x8b to
> support this, that's NOT a big deal and we just end up with a node
> that's the same size as before.
> 
> And we get rid of this odd intermediate io_uring_kernel_buf struct,
> which is WAY bigger anyway, and requires TWO allocations where the
> existing io_mapped_ubuf embeds the bvec. I'd argue two vs one allocs is
> a much bigger deal for performance reasons.

The structure is actually preallocated in ublk, so there is zero
allocation in my patchset in the case where bios aren't merged.

> 
> As a final note, one thing I mentioned in an earlier email is that part
> of the issue here is that there are several things that need ironing
> out, and they are actually totally separate. One is the buffer side,
> which this email mostly deals with, the other one is the grouping
> concept.
> 
> For the sqe grouping, one sticking point has been using that last
> sqe->flags bit. I was thinking about this last night, and what if we got
> away from using a flag entirely? At some point io_uring needs to deal
> with this flag limitation, but it's arguably a large effort, and I'd
> greatly prefer not having to paper over it to shove in grouped SQEs.
> 
> So... what if we simply added a new OP, IORING_OP_GROUP_START, or
> something like that. Hence instead of having a designated group leader
> bit for an OP, eg:
> 
> 	sqe = io_uring_get_sqe(ring);
> 	io_uring_prep_read(sqe, ...);
> 	sqe->flags |= IOSQE_GROUP_BIT;
> 
> you'd do:
> 
> 	sqe = io_uring_get_sqe(ring);
> 	io_uring_prep_group_start(sqe, ...);
> 	sqe->flags |= IOSQE_IO_LINK;

One problem is how to map IORING_OP_GROUP_START to uring_cmd.

IOSQE_IO_LINK is another trouble, because it then becomes impossible to
model dependencies among groups.

> 
> 	sqe = io_uring_get_sqe(ring);
> 	io_uring_prep_read(sqe, ...);
> 
> which would be the equivalent transformation - the read would be the
> group leader as it's the first member of that chain. The read should set
> IOSQE_IO_LINK for as long as it has members. The members in that group
> would NOT be serialized.
> They would use IOSQE_IO_LINK purely to be part
> of that group, but IOSQE_IO_LINK would not cause them to be serialized.
> Hence the link just implies membership, not ordering within the group.
> 
> This removes the flag issue, with the slight caveat that IOSQE_IO_LINK
> has a different meaning inside the group. Maybe you'd need a GROUP_END

No, for the group leader IOSQE_IO_LINK works the same as for any other
normal SQE.

Thanks,
Ming
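For reference, the proposed transformation written out as a full submission sequence might look like the following. This is pure pseudocode: `io_uring_prep_group_start()` does not exist, and the IOSQE_IO_LINK semantics shown are exactly the ones under debate above.

```c
/* Hypothetical: open a group, then submit a leader plus two members
 * that are issued in parallel. */
sqe = io_uring_get_sqe(ring);
io_uring_prep_group_start(sqe, 0);	/* made-up opcode/helper */
sqe->flags |= IOSQE_IO_LINK;

sqe = io_uring_get_sqe(ring);		/* leader: first sqe in the chain */
io_uring_prep_read(sqe, fd, buf, len, 0);
sqe->flags |= IOSQE_IO_LINK;		/* keeps the group open */

sqe = io_uring_get_sqe(ring);		/* member: runs in parallel */
io_uring_prep_write(sqe, fd_a, buf, len, 0);
sqe->flags |= IOSQE_IO_LINK;		/* membership, not ordering */

sqe = io_uring_get_sqe(ring);		/* last member: no link ends the group */
io_uring_prep_write(sqe, fd_b, buf, len, 0);
```

The objection raised above is visible here: with IOSQE_IO_LINK repurposed for membership, there is no bit left to express "this whole group must complete before the next group starts".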
On Thu, Oct 31, 2024 at 09:07:18AM -0600, Jens Axboe wrote:
> Another option is that we fully stick with the per-group buffer concept,
> which could also work just fine with io_rsrc_node. If we stick with the
> OP_GROUP_START thing, then that op could set up a group_buf thing that
> is local to that group. This is where an instantiated buffer would
> appear too, keeping it strictly local to that group. That avoids needing
> any kind of ring state for this, and the group_buf would be propagated
> from the group leader to the members. The group_buf lives until all
> members of the group are dead, at which point it's released. I forget if
> your grouping implementation mandated the same scheme I originally had,
> where the group leader completes last? If it does, then it's a natural
> thing

Yes, the group leader's CQE is posted after all members' CQEs are
posted, and the leader request is freed after all member requests are
freed.

> to have the group_buf live for the duration of the group leader, and it
> can just be normal per-io_kiocb data at that point, nothing special
> needed there.

That is basically how GROUP_KBUF is written, just not using
io_rsrc_node.

> 
> As with the previous scheme, each request using one of these
> IORING_RSRC_KBUFFER nodes just assigns it like it would any other fixed
> resource node, and the normal completion path puts it.

Here the thing is simpler: all members can just use the leader's
io_rsrc_node. The leader's rsrc_node reference is grabbed & leased to
each member when the member is added to the group, and released by the
normal io_req_put_rsrc_nodes().

Thanks,
Ming
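The borrow-from-the-leader scheme described above can be modeled in a few lines of userspace C. The names are hypothetical and the real io_uring code differs; this only demonstrates the refcount flow: each member pins the leader's node with its own reference, so the buffer is released only when the leader, which completes last, does its normal put.

```c
#include <stddef.h>

/* Toy model of members leasing the leader's rsrc node reference. */

struct rsrc_node {
	int refs;
	int *released;		/* set when the final reference drops */
};

struct req {
	struct rsrc_node *node;
};

static void rn_get(struct rsrc_node *n)
{
	n->refs++;
}

static void rn_put(struct rsrc_node *n)
{
	if (--n->refs == 0)
		*n->released = 1;	/* buffer goes back to the driver here */
}

/* Adding a member grabs a reference on the leader's node for it... */
static void group_add_member(struct req *leader, struct req *member)
{
	rn_get(leader->node);
	member->node = leader->node;
}

/* ...and the normal per-request put path drops it on completion
 * (standing in for io_req_put_rsrc_nodes() in the discussion above). */
static void req_put_rsrc_node(struct req *req)
{
	if (req->node) {
		rn_put(req->node);
		req->node = NULL;
	}
}
```

Because the leader's CQE is posted and the leader freed only after all members, the leader's own reference always outlives the members' borrowed ones, which is what makes this scheme safe without any extra ring-wide state.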