Message ID: cover.1701268753.git.robin.murphy@arm.com
Series: dma-mapping: Clean up arch_setup_dma_ops()
On Wed, Nov 29, 2023 at 05:42:57PM +0000, Robin Murphy wrote:
> Hi all,
>
> Prompted by Jason's proposal[1], here's a first step towards truly
> unpicking the dma_configure vs. IOMMU mess. As I commented before, we
> have an awful lot of accumulated cruft and technical debt here making
> things more complicated than they need to be, and we already have hacks
> on top of hacks trying to work around it, so polishing those hacks even
> further is really not a desirable direction of travel. And I do know
> they're hacks, because I wrote most of them and still remember enough of
> the context of the time ;)

I quite like this; I was also looking at getting rid of those other
parameters.

I wanted to take smaller steps because it is all pretty hairy.

One thing that still concerns me is that if the FW data restricts the
valid IOVA window, that really should be reflected into the reserved
ranges and not just dumped into the struct device for use by the DMA
API.

Or, perhaps, vfio/iommufd should be using the struct device data to
generate some additional reserved ranges?

Either way, I would like to see dma-iommu and the rest of the
subsystem agree on what the valid IOVA ranges actually are.

Jason
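[To make the suggestion above concrete: the following is a hypothetical
sketch, not existing kernel code. It walks the firmware-described DMA
windows in dev->dma_range_map and turns the gaps between them into
IOMMU_RESV_RESERVED regions, so dma-iommu and vfio/iommufd would derive
the same picture of usable IOVA space. The helper name is invented;
struct bus_dma_region and iommu_alloc_resv_region() are the real
interfaces.]

#include <linux/device.h>
#include <linux/dma-direct.h>
#include <linux/iommu.h>

/*
 * Hypothetical helper: report the gaps between the FW-described DMA
 * windows as reserved regions. Assumes dev->dma_range_map is sorted by
 * dma_start and non-overlapping; the map is terminated by size == 0.
 */
static void resv_regions_from_range_map(struct device *dev,
					struct list_head *head)
{
	const struct bus_dma_region *m = dev->dma_range_map;
	dma_addr_t prev_end = 0;
	struct iommu_resv_region *r;

	if (!m)
		return;	/* no FW restriction: nothing extra to reserve */

	for (; m->size; m++) {
		if (m->dma_start > prev_end) {
			r = iommu_alloc_resv_region(prev_end,
						    m->dma_start - prev_end,
						    0, IOMMU_RESV_RESERVED,
						    GFP_KERNEL);
			if (r)
				list_add_tail(&r->list, head);
		}
		prev_end = m->dma_start + m->size;
	}
	/*
	 * A real version would also reserve everything above the last
	 * window, up to the end of the domain's aperture.
	 */
}

[Whether such an enumeration would belong in the core, in dma-iommu, or
behind an iommufd interface is exactly the open question in this
thread.]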
On 29/11/2023 8:36 pm, Jason Gunthorpe wrote:
> On Wed, Nov 29, 2023 at 05:42:57PM +0000, Robin Murphy wrote:
>> Hi all,
>>
>> Prompted by Jason's proposal[1], here's a first step towards truly
>> unpicking the dma_configure vs. IOMMU mess. As I commented before, we
>> have an awful lot of accumulated cruft and technical debt here making
>> things more complicated than they need to be, and we already have hacks
>> on top of hacks trying to work around it, so polishing those hacks even
>> further is really not a desirable direction of travel. And I do know
>> they're hacks, because I wrote most of them and still remember enough of
>> the context of the time ;)
>
> I quite like this; I was also looking at getting rid of those other
> parameters.
>
> I wanted to take smaller steps because it is all pretty hairy.
>
> One thing that still concerns me is that if the FW data restricts the
> valid IOVA window, that really should be reflected into the reserved
> ranges and not just dumped into the struct device for use by the DMA
> API.
>
> Or, perhaps, vfio/iommufd should be using the struct device data to
> generate some additional reserved ranges?
>
> Either way, I would like to see dma-iommu and the rest of the
> subsystem agree on what the valid IOVA ranges actually are.

Note that there is some intentional divergence where iommu-dma reserves
IOVAs matching PCI outbound windows because it knows it wants to avoid
clashing with potential peer-to-peer addresses and doesn't want to have to
get into the details of ACS redirect etc., but we don't expose those as
generic reserved regions because they're firmly a property of the PCI host
bridge, not of the IOMMU group (and more practically, because we did do so
briefly and it made QEMU unhappy). I think there may also have been some
degree of conclusion that it's not the IOMMU API's place to get in the way
of other domain users trying to do weird P2P stuff if they really want to.

Another issue is that the generic dma_range_map strictly represents
device-specific constraints which may not always be desirable or
appropriate to apply to a whole group. There wasn't really a conscious
decision as such, but that is more or less why we still only consider
PCI's bridge->dma_ranges (which comes from the same underlying data),
since we can at least assume every device behind a bridge accesses memory
through that bridge and so inherits its restrictions. However, I don't
recall any conscious decision for inbound windows to only be considered
for DMA domain reservations rather than for proper reserved regions -
pretty sure that's just a case of that code being added in the place
where it seemed to fit best at the time (because hey, it's more host
bridge windows and we already have a thing for host bridge windows...)

Thanks,
Robin.
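[For reference, the divergence described above lives in
iova_reserve_pci_windows() in drivers/iommu/dma-iommu.c. Below is a
trimmed sketch of its logic as of around this time - error handling and
the reservation of the space above the last inbound window are elided -
showing how both the outbound windows and the gaps between inbound
dma_ranges are carved straight out of the DMA domain's IOVA allocator
rather than being exposed as reserved regions:]

#include <linux/iova.h>
#include <linux/pci.h>

/* Trimmed, illustrative sketch of iova_reserve_pci_windows(). */
static void reserve_pci_windows_sketch(struct pci_dev *dev,
				       struct iova_domain *iovad)
{
	struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
	struct resource_entry *window;
	phys_addr_t start = 0, end;

	/* Outbound windows: keep IOVAs away from potential P2P addresses */
	resource_list_for_each_entry(window, &bridge->windows) {
		if (resource_type(window->res) != IORESOURCE_MEM)
			continue;
		reserve_iova(iovad,
			     iova_pfn(iovad, window->res->start - window->offset),
			     iova_pfn(iovad, window->res->end - window->offset));
	}

	/*
	 * Inbound windows: the bridge only forwards DMA within its
	 * dma_ranges, so reserve the gaps between successive entries.
	 */
	resource_list_for_each_entry(window, &bridge->dma_ranges) {
		end = window->res->start - window->offset;
		if (end > start)
			reserve_iova(iovad, iova_pfn(iovad, start),
				     iova_pfn(iovad, end) - 1);
		start = window->res->end - window->offset + 1;
	}
}

[Because these reservations go into the iova_domain used by iommu-dma,
they are invisible to anyone walking iommu_get_resv_regions() - which
is the asymmetry the rest of the thread circles around.]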
On Fri, Dec 01, 2023 at 01:07:36PM +0000, Robin Murphy wrote:
> On 29/11/2023 8:36 pm, Jason Gunthorpe wrote:
> > On Wed, Nov 29, 2023 at 05:42:57PM +0000, Robin Murphy wrote:
> > > Hi all,
> > >
> > > Prompted by Jason's proposal[1], here's a first step towards truly
> > > unpicking the dma_configure vs. IOMMU mess. As I commented before, we
> > > have an awful lot of accumulated cruft and technical debt here making
> > > things more complicated than they need to be, and we already have hacks
> > > on top of hacks trying to work around it, so polishing those hacks even
> > > further is really not a desirable direction of travel. And I do know
> > > they're hacks, because I wrote most of them and still remember enough of
> > > the context of the time ;)
> >
> > I quite like this; I was also looking at getting rid of those other
> > parameters.
> >
> > I wanted to take smaller steps because it is all pretty hairy.
> >
> > One thing that still concerns me is that if the FW data restricts the
> > valid IOVA window, that really should be reflected into the reserved
> > ranges and not just dumped into the struct device for use by the DMA
> > API.
> >
> > Or, perhaps, vfio/iommufd should be using the struct device data to
> > generate some additional reserved ranges?
> >
> > Either way, I would like to see dma-iommu and the rest of the
> > subsystem agree on what the valid IOVA ranges actually are.
>
> Note that there is some intentional divergence where iommu-dma reserves
> IOVAs matching PCI outbound windows because it knows it wants to avoid
> clashing with potential peer-to-peer addresses and doesn't want to have to
> get into the details of ACS redirect etc., but we don't expose those as
> generic reserved regions because they're firmly a property of the PCI host
> bridge, not of the IOMMU group (and more practically, because we did do so
> briefly and it made QEMU unhappy). I think there may also have been some
> degree of conclusion that it's not the IOMMU API's place to get in the way
> of other domain users trying to do weird P2P stuff if they really want to.

I'm not sure this is the fully correct conclusion - e.g. if today we
take a NIC device on a non-ACS topology and run DPDK through VFIO, it
has a chance of failure because some IOVAs simply cannot be used by
DPDK for DMA at all.

qemu and kvm are a different situation and want different things. E.g.
qemu would want to identity-map the PCI BAR spaces to the IOVAs they
are claiming. It should still somehow carve out any other IOVA that is
unusable due to guest-invisible ACS and reflect that through FW tables
into the VM.

I'm starting to see people build non-ACS systems and want them to work
with VFIO, and I'm a little worried we have been too loose here.

> bridge and so inherits its restrictions. However, I don't recall any
> conscious decision for inbound windows to only be considered for DMA domain
> reservations rather than for proper reserved regions - pretty sure that's
> just a case of that code being added in the place where it seemed to fit
> best at the time (because hey, it's more host bridge windows and we already
> have a thing for host bridge windows...)

Yeah, and I don't think anyone actually cared much...

At least as a step, it would be nice if the DMA-API-only restrictions
could come out as a special type of reserved region. Then the caller
could decide whether they want to follow them or not. iommufd could
provide an opt-in API to DPDK that matches the DMA API's safe IOVA
allocator.

Jason
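[To sketch that last step: the region type and helper below are
hypothetical, invented purely to illustrate the proposal - no such type
exists in the kernel. DMA-API-only restrictions would be reported as a
distinct, advisory reserved region type, which iommu-dma (and an opt-in
iommufd mode for DPDK-style users wanting the DMA API's safe IOVA
behaviour) would honour, while a VFIO user deliberately doing P2P could
ignore it.]

#include <linux/iommu.h>

/* Hypothetical new reserved region type; the name is made up here. */
#define IOMMU_RESV_DMA_API_ONLY	(IOMMU_RESV_SW_MSI + 1)

/*
 * Hypothetical consumer-side policy: every existing type stays a hard
 * reservation for all users; the new advisory type only binds callers
 * that asked for DMA-API-style "safe" IOVA allocation.
 */
static bool iova_must_avoid(const struct iommu_resv_region *r,
			    bool dma_api_safe)
{
	if ((int)r->type == IOMMU_RESV_DMA_API_ONLY)
		return dma_api_safe;
	return true;
}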