Message ID: 169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net
Series: Exploring biovec support in (R)DMA API
On Thu, Oct 19, 2023 at 11:25:31AM -0400, Chuck Lever wrote:
> The SunRPC stack manages pages (and eventually, folios) via an
> array of struct biovec items within struct xdr_buf. We have not
> fully committed to replacing the struct page array in xdr_buf
> because, although the socket API supports biovec arrays, the RDMA
> stack uses struct scatterlist rather than struct biovec.
>
> This (incomplete) series explores what it might look like if the
> RDMA core API could support struct biovec array arguments. The
> series compiles on x86, but I haven't tested it further. I'm posting
> early in hopes of starting further discussion.

Good call, because I think patch 2/9 is a complete non-starter.

The fundamental problem with scatterlist is that it is both input
and output for the mapping operation. You're replicating this mistake
in a different data structure.

My vision for the future is that we have phyr as our input structure.
That looks something like:

	struct phyr {
		phys_addr_t start;
		size_t len;
	};

On 32-bit, that's 8 or 12 bytes; on 64-bit it's 16 bytes. This is
better than biovec because biovec is sometimes larger than that, and
it allows specifying IO to memory that does not have a struct page.

Our output structure can continue being called the scatterlist, but
it needs to go on a diet and look more like:

	struct scatterlist {
		dma_addr_t dma_address;
		size_t dma_length;
	};

Getting to this point is going to be a huge amount of work, and I need
to finish folios first. Or somebody else can work on it ;-)
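A minimal sketch of how those two structures might pair up in a mapping
call, assuming a hypothetical dma_map_phyrs() entry point and a renamed
output struct; only struct phyr and the slimmed dma_address/dma_length
fields come from the mail above, everything else is invented for
illustration:

	/*
	 * Hypothetical sketch only: dma_map_phyrs() and struct dma_range
	 * are invented names; the field layouts follow the mail above.
	 */
	#include <linux/device.h>
	#include <linux/dma-direction.h>
	#include <linux/types.h>

	struct phyr {			/* input: physical ranges, no struct page */
		phys_addr_t	start;
		size_t		len;
	};

	struct dma_range {		/* output: what the device actually sees */
		dma_addr_t	dma_address;
		size_t		dma_length;
	};

	/*
	 * Map @nr_in physical ranges for @dev, filling @out (possibly with
	 * fewer entries than @nr_in if adjacent ranges can be merged).
	 * Returns the number of DMA ranges produced, or a negative errno.
	 */
	int dma_map_phyrs(struct device *dev, const struct phyr *in, int nr_in,
			  struct dma_range *out, enum dma_data_direction dir);

The point of the split is that the caller-owned input array stays
read-only and compact, while the output array is sized and written only
by the mapping layer.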
On 19/10/2023 4:25 pm, Chuck Lever wrote:
> The SunRPC stack manages pages (and eventually, folios) via an
> array of struct biovec items within struct xdr_buf. We have not
> fully committed to replacing the struct page array in xdr_buf
> because, although the socket API supports biovec arrays, the RDMA
> stack uses struct scatterlist rather than struct biovec.
>
> This (incomplete) series explores what it might look like if the
> RDMA core API could support struct biovec array arguments. The
> series compiles on x86, but I haven't tested it further. I'm posting
> early in hopes of starting further discussion.
>
> Are there other upper layer API consumers, besides SunRPC, who might
> prefer the use of biovec over scatterlist?
>
> Besides handling folios as well as single pages in bv_page, what
> other work might be needed in the DMA layer?

Eww, please no. It's already well established that the scatterlist
design is horrible and we want to move to something sane and actually
suitable for modern DMA scenarios. Something where callers can pass a
set of pages/physical address ranges in, and get a (separate) set of
DMA ranges out, without any bonkers packing of different-length lists
into the same list structure. IIRC Jason did a bit of prototyping a
while back, but it may be looking for someone else to pick up the idea
and give it some more attention.

What we definitely don't want at this point is a copy-paste of the same
bad design with all the same problems. I would have to NAK patch 8 on
principle, because the existing iommu_dma_map_sg() stuff has always
been utterly mad, but it had to be to work around the limitations of
the existing scatterlist design while bridging between two other
established APIs; there's no good excuse for having *two* copies of all
that to maintain if one doesn't have an existing precedent to fit into.

Thanks,
Robin.

> What RDMA core APIs should be converted? IMO a DMA mapping and
> registration API for biovecs would be needed. Maybe RDMA Read and
> Write too?
>
> ---
>
> Chuck Lever (9):
>       dma-debug: Fix a typo in a debugging eye-catcher
>       bvec: Add bio_vec fields to manage DMA mapping
>       dma-debug: Add dma_debug_ helpers for mapping bio_vec arrays
>       mm: kmsan: Add support for DMA mapping bio_vec arrays
>       dma-direct: Support direct mapping bio_vec arrays
>       DMA-API: Add dma_sync_bvecs_for_cpu() and dma_sync_bvecs_for_device()
>       DMA: Add dma_map_bvecs_attrs()
>       iommu/dma: Support DMA-mapping a bio_vec array
>       RDMA: Add helpers for DMA-mapping an array of bio_vecs
>
>
>  drivers/iommu/dma-iommu.c   | 368 ++++++++++++++++++++++++++++++++++++
>  drivers/iommu/iommu.c       |  58 ++++++
>  include/linux/bvec.h        | 143 ++++++++++++++
>  include/linux/dma-map-ops.h |   8 +
>  include/linux/dma-mapping.h |   9 +
>  include/linux/iommu.h       |   4 +
>  include/linux/kmsan.h       |  20 ++
>  include/rdma/ib_verbs.h     |  29 +++
>  kernel/dma/debug.c          | 165 +++++++++++++++-
>  kernel/dma/debug.h          |  38 ++++
>  kernel/dma/direct.c         |  92 +++++++++
>  kernel/dma/direct.h         |  17 ++
>  kernel/dma/mapping.c        |  93 +++++++++
>  mm/kmsan/hooks.c            |  13 ++
>  14 files changed, 1056 insertions(+), 1 deletion(-)
>
> --
> Chuck Lever
On Thu, Oct 19, 2023 at 04:53:43PM +0100, Matthew Wilcox wrote:
> On Thu, Oct 19, 2023 at 11:25:31AM -0400, Chuck Lever wrote:
> > The SunRPC stack manages pages (and eventually, folios) via an
> > array of struct biovec items within struct xdr_buf. We have not
> > fully committed to replacing the struct page array in xdr_buf
> > because, although the socket API supports biovec arrays, the RDMA
> > stack uses struct scatterlist rather than struct biovec.
> >
> > This (incomplete) series explores what it might look like if the
> > RDMA core API could support struct biovec array arguments. The
> > series compiles on x86, but I haven't tested it further. I'm posting
> > early in hopes of starting further discussion.
>
> Good call, because I think patch 2/9 is a complete non-starter.
>
> The fundamental problem with scatterlist is that it is both input
> and output for the mapping operation. You're replicating this mistake
> in a different data structure.

Fwiw, I'm not at all wedded to the "copy-and-paste SGL" approach.

> My vision for the future is that we have phyr as our input structure.
> That looks something like:
>
> 	struct phyr {
> 		phys_addr_t start;
> 		size_t len;
> 	};
>
> On 32-bit, that's 8 or 12 bytes; on 64-bit it's 16 bytes. This is
> better than biovec because biovec is sometimes larger than that, and
> it allows specifying IO to memory that does not have a struct page.

Passing a folio rather than a page is indeed one of the benefits we
would like to gain for SunRPC.

> Our output structure can continue being called the scatterlist, but
> it needs to go on a diet and look more like:
>
> 	struct scatterlist {
> 		dma_addr_t dma_address;
> 		size_t dma_length;
> 	};
>
> Getting to this point is going to be a huge amount of work, and I need
> to finish folios first. Or somebody else can work on it ;-)

I would like to see forward progress, as SunRPC has some skin in this
game. I'm happy to contribute code or review. If there is some
consensus on your proposed approach, I can start with that.
On Thu, Oct 19, 2023 at 05:43:11PM +0100, Robin Murphy wrote:
> On 19/10/2023 4:25 pm, Chuck Lever wrote:
> > The SunRPC stack manages pages (and eventually, folios) via an
> > array of struct biovec items within struct xdr_buf. We have not
> > fully committed to replacing the struct page array in xdr_buf
> > because, although the socket API supports biovec arrays, the RDMA
> > stack uses struct scatterlist rather than struct biovec.
> >
> > This (incomplete) series explores what it might look like if the
> > RDMA core API could support struct biovec array arguments. The
> > series compiles on x86, but I haven't tested it further. I'm posting
> > early in hopes of starting further discussion.
> >
> > Are there other upper layer API consumers, besides SunRPC, who might
> > prefer the use of biovec over scatterlist?
> >
> > Besides handling folios as well as single pages in bv_page, what
> > other work might be needed in the DMA layer?
>
> Eww, please no. It's already well established that the scatterlist
> design is horrible and we want to move to something sane and actually
> suitable for modern DMA scenarios. Something where callers can pass a
> set of pages/physical address ranges in, and get a (separate) set of
> DMA ranges out, without any bonkers packing of different-length lists
> into the same list structure. IIRC Jason did a bit of prototyping a
> while back, but it may be looking for someone else to pick up the idea
> and give it some more attention.

I put it aside for the moment as the direction changed somewhat after
the conference.

> What we definitely don't want at this point is a copy-paste of the same
> bad design with all the same problems. I would have to NAK patch 8 on
> principle, because the existing iommu_dma_map_sg() stuff has always
> been utterly mad, but it had to be to work around the limitations of
> the existing scatterlist design while bridging between two other
> established APIs; there's no good excuse for having *two* copies of all
> that to maintain if one doesn't have an existing precedent to fit into.

The idea from HCH I've been going toward was to allow each subsystem to
do what made sense for it. The DMA API would provide some more generic
interfaces that could be used to implement a map_sg without having to
be tightly coupled to the DMA subsystem itself.

The concept would be to allow something like NVMe to go directly from
the current BIO into its native HW format, without having to do a round
trip through an intermediate storage array.

How this formulates into RDMA work requests I haven't thought about;
this is a large enough thing that I need some mlx5 driver support to do
the first step, and that was supposed to be this month, but a war has
caused some delay :(

RDMA has a complicated historical relationship to the dma_api, sadly.

This plan also wants the significant archs to all use the common
dma-iommu - now that S390 is migrated, only power remains...

Jason
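One plausible reading of "go directly from the BIO into its native HW
format", sketched here for the direct-map case only: map each bio_vec
element with the existing per-page API and write the result straight
into a driver-private descriptor, with no intermediate scatterlist.
struct hw_sge and build_hw_sges() are invented for illustration; only
dma_map_page()/dma_mapping_error() and the bio_vec fields are existing
kernel interfaces.

	#include <linux/bvec.h>
	#include <linux/dma-mapping.h>

	struct hw_sge {			/* imaginary device descriptor entry */
		u64	addr;
		u32	length;
		u32	rsvd;
	};

	static int build_hw_sges(struct device *dev, const struct bio_vec *bv,
				 int nr, struct hw_sge *sge)
	{
		int i;

		for (i = 0; i < nr; i++) {
			/* Map one segment and emit it directly into HW format. */
			dma_addr_t addr = dma_map_page(dev, bv[i].bv_page,
						       bv[i].bv_offset,
						       bv[i].bv_len,
						       DMA_TO_DEVICE);

			if (dma_mapping_error(dev, addr))
				return -EIO;	/* real code would unwind prior mappings */
			sge[i].addr   = addr;
			sge[i].length = bv[i].bv_len;
			sge[i].rsvd   = 0;
		}
		return 0;
	}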
On Thu, Oct 19, 2023 at 04:53:43PM +0100, Matthew Wilcox wrote:
> > RDMA core API could support struct biovec array arguments. The
> > series compiles on x86, but I haven't tested it further. I'm posting
> > early in hopes of starting further discussion.
>
> Good call, because I think patch 2/9 is a complete non-starter.
>
> The fundamental problem with scatterlist is that it is both input
> and output for the mapping operation. You're replicating this mistake
> in a different data structure.

Agreed.

> My vision for the future is that we have phyr as our input structure.
> That looks something like:
>
> 	struct phyr {
> 		phys_addr_t start;
> 		size_t len;
> 	};

So my plan was always to turn the bio_vec into that structure, since
before you came up with the phyr name. But that's really a separate
discussion, as we might as well support multiple input formats if we
really have to.

> Our output structure can continue being called the scatterlist, but
> it needs to go on a diet and look more like:
>
> 	struct scatterlist {
> 		dma_addr_t dma_address;
> 		size_t dma_length;
> 	};

I called it a dma_vec in my years-old proposal that I can't find any
more.

> Getting to this point is going to be a huge amount of work, and I need
> to finish folios first. Or somebody else can work on it ;-)

Well, we can stage this. I wish I could find my old proposal about the
dma_batch API (I remember Robin commented on it; maybe he is better at
finding it than me). I think that mostly still stands, independent of
the transformation of the input structure. The basic idea is that we
add a DMA batching API, where you start a batch with one call, then add
new physically discontiguous vectors to it until it is full, and then
finalize it. Very similar to how the iommu API works internally. We'd
then only use this API if we actually have an iommu (or, if we want to
be fancy, swiotlb that could do the same linearization); for the direct
map we'd still do the equivalent of dma_map_page for each element, as
we need one output vector per input vector anyway.

As Jason pointed out, the only fancy implementation we need for now is
the IOMMU API. arm32 and powerpc will need to do the work to convert to
it or do their own work.
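A sketch of what that staged interface could look like, with invented
dma_batch_* names; nothing below is an existing kernel API, it only
mirrors the start/add/finalize flow described above.

	#include <linux/device.h>
	#include <linux/dma-direction.h>
	#include <linux/types.h>

	struct dma_batch;	/* opaque; owned by the DMA/IOMMU layer */

	/* Start a batch that will map at most @max_len bytes for @dev. */
	int dma_batch_start(struct device *dev, struct dma_batch *batch,
			    size_t max_len, enum dma_data_direction dir);

	/*
	 * Append one physically discontiguous range; returns -ENOSPC once
	 * the batch is full so the caller knows to finalize and start a
	 * new one.
	 */
	int dma_batch_add(struct dma_batch *batch, phys_addr_t paddr,
			  size_t len);

	/*
	 * Finalize: flush IOTLBs etc. and return the single dma_addr_t
	 * covering everything added so far (the IOMMU made it contiguous).
	 */
	dma_addr_t dma_batch_finalize(struct dma_batch *batch);

A caller walking an array of physical ranges would loop: start a batch,
add ranges until it runs out of them or gets -ENOSPC, finalize, and
repeat, ending up with one output vector per batch rather than per
input range.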
On 2023-10-20 05:58, Christoph Hellwig wrote:
> On Thu, Oct 19, 2023 at 04:53:43PM +0100, Matthew Wilcox wrote:
>>> RDMA core API could support struct biovec array arguments. The
>>> series compiles on x86, but I haven't tested it further. I'm posting
>>> early in hopes of starting further discussion.
>>
>> Good call, because I think patch 2/9 is a complete non-starter.
>>
>> The fundamental problem with scatterlist is that it is both input
>> and output for the mapping operation. You're replicating this mistake
>> in a different data structure.
>
> Agreed.
>
>> My vision for the future is that we have phyr as our input structure.
>> That looks something like:
>>
>> 	struct phyr {
>> 		phys_addr_t start;
>> 		size_t len;
>> 	};
>
> So my plan was always to turn the bio_vec into that structure, since
> before you came up with the phyr name. But that's really a separate
> discussion, as we might as well support multiple input formats if we
> really have to.
>
>> Our output structure can continue being called the scatterlist, but
>> it needs to go on a diet and look more like:
>>
>> 	struct scatterlist {
>> 		dma_addr_t dma_address;
>> 		size_t dma_length;
>> 	};
>
> I called it a dma_vec in my years-old proposal that I can't find any
> more.
>
>> Getting to this point is going to be a huge amount of work, and I need
>> to finish folios first. Or somebody else can work on it ;-)
>
> Well, we can stage this. I wish I could find my old proposal about the
> dma_batch API (I remember Robin commented on it; maybe he is better at
> finding it than me).

Heh, the dirty secret is that Office 365 is surprisingly effective at
searching 9 years' worth of email I haven't deleted :)

https://lore.kernel.org/linux-iommu/79926b59-0eb9-2b88-b1bb-1bd472b10370@arm.com/

> I think that mostly still stands, independent of the transformation of
> the input structure. The basic idea is that we add a DMA batching API,
> where you start a batch with one call, then add new physically
> discontiguous vectors to it until it is full, and then finalize it.
> Very similar to how the iommu API works internally. We'd then only use
> this API if we actually have an iommu (or, if we want to be fancy,
> swiotlb that could do the same linearization); for the direct map we'd
> still do the equivalent of dma_map_page for each element, as we need
> one output vector per input vector anyway.

The other thing that's clear by now is that I think we definitely want
distinct APIs for "please map this bunch of disjoint things" for true
scatter-gather cases like biovecs where it's largely just convenient to
keep them grouped together (but opportunistic merging might still be a
bonus), vs. "please give me a linearised DMA mapping of these pages
(and fail if you can't)" for the dma-buf style cases.

Cheers,
Robin.

> As Jason pointed out, the only fancy implementation we need for now is
> the IOMMU API. arm32 and powerpc will need to do the work to convert
> to it or do their own work.
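The distinction Robin draws could end up as two separate entry points;
a sketch with invented names, reusing the phyr/dma_range shapes from
the earlier sketch in this thread:

	#include <linux/device.h>
	#include <linux/dma-direction.h>
	#include <linux/types.h>

	/* Both structs as sketched earlier in the thread. */
	struct phyr      { phys_addr_t start; size_t len; };
	struct dma_range { dma_addr_t dma_address; size_t dma_length; };

	/*
	 * Scatter-gather style: N disjoint ranges in, up to N DMA ranges
	 * out; opportunistic merging allowed but never required.
	 */
	int dma_map_ranges(struct device *dev, const struct phyr *in, int nr,
			   struct dma_range *out, enum dma_data_direction dir);

	/*
	 * dma-buf style: all-or-nothing linearisation into a single
	 * contiguous DMA range, failing if the hardware can't provide one.
	 */
	int dma_map_ranges_linear(struct device *dev, const struct phyr *in,
				  int nr, dma_addr_t *dma_addr, size_t *dma_len,
				  enum dma_data_direction dir);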
On Fri, Oct 20, 2023 at 11:30:06AM +0100, Robin Murphy wrote:
>> Well, we can stage this. I wish I could find my old proposal about the
>> dma_batch API (I remember Robin commented on it; maybe he is better at
>> finding it than me).
>
> Heh, the dirty secret is that Office 365 is surprisingly effective at
> searching 9 years' worth of email I haven't deleted :)
>
> https://lore.kernel.org/linux-iommu/79926b59-0eb9-2b88-b1bb-1bd472b10370@arm.com/

Perfect, thanks!

> The other thing that's clear by now is that I think we definitely want
> distinct APIs for "please map this bunch of disjoint things" for true
> scatter-gather cases like biovecs where it's largely just convenient to
> keep them grouped together (but opportunistic merging might still be a
> bonus), vs. "please give me a linearised DMA mapping of these pages
> (and fail if you can't)" for the dma-buf style cases.

Hmm, I'm not sure I agree. For both the iommu and swiotlb cases we get
the linear mapping for free, with small limitations:

 - for the iommu case the alignment needs to be a multiple of the iommu
   page size
 - for swiotlb the size of each mapping is very limited

If these conditions are met we can linearize for free; if they aren't,
we can't linearize at all. But maybe I'm missing something?
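For the IOMMU case, the "free" linearisation condition mentioned above
can be expressed as a boundary check: every interior segment boundary
has to land on an IOMMU page boundary. A rough sketch; can_linearize()
is invented, and the boundary-check reading of the alignment rule is an
assumption, not something stated in the mail.

	#include <linux/types.h>

	struct phyr { phys_addr_t start; size_t len; };	/* as sketched earlier */

	static bool can_linearize(const struct phyr *v, int nr,
				  size_t iommu_pgsize)
	{
		int i;

		for (i = 0; i < nr; i++) {
			/* every segment except the first must start page-aligned */
			if (i > 0 && (v[i].start & (iommu_pgsize - 1)))
				return false;
			/* every segment except the last must end page-aligned */
			if (i < nr - 1 &&
			    ((v[i].start + v[i].len) & (iommu_pgsize - 1)))
				return false;
		}
		return true;
	}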