Message ID | 20190508132403.1454-5-paul.durrant@citrix.com (mailing list archive)
---|---
State | Superseded
Series | iommu groups + cleanup
On Wed, May 08, 2019 at 02:24:02PM +0100, Paul Durrant wrote:
> Some devices may share a single PCIe initiator id, e.g. if they are actually
> legacy PCI devices behind a bridge, and hence DMA from such devices will
> be subject to the same address translation in the IOMMU. Hence these devices
> should be treated as a unit for the purposes of assignment. There are also
> other reasons why multiple devices should be treated as a unit, e.g. those
> subject to a shared RMRR or those downstream of a bridge that does not
> support ACS.
>
> This patch introduces a new struct iommu_group to act as a container for
> devices that should be treated as a unit, and builds a list of them as
> PCI devices are scanned. The iommu_ops already implement a method,
> get_device_group_id(), that is seemingly intended to return the initiator
> id for a given SBDF so use this as the mechanism for group assignment in
> the first instance. Assignment based on shared RMRR or lack of ACS will be
> dealt with in subsequent patches, as will modifications to the device
> assignment code.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/drivers/passthrough/iommu.c | 76 +++++++++++++++++++++++++++++++++++++++++
>  xen/drivers/passthrough/pci.c   |  3 ++
>  xen/include/xen/iommu.h         |  7 ++++
>  xen/include/xen/pci.h           |  3 ++
>  4 files changed, 89 insertions(+)
>
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index d3a6199b77..11319fbaae 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -655,6 +655,82 @@ static void iommu_dump_p2m_table(unsigned char key)
>      }
>  }
>
> +#ifdef CONFIG_HAS_PCI
> +
> +struct iommu_group {
> +    unsigned int id;
> +    unsigned int index;

I'm not sure I see the point of the index field, isn't it enough to
just use the ID field?

The ID is already used as a unique key in the code below for the radix
tree operations.

> +    struct list_head devs_list;
> +};
> +
> +static struct radix_tree_root iommu_groups;
> +
> +void __init iommu_groups_init(void)
> +{
> +    radix_tree_init(&iommu_groups);
> +}
> +
> +static struct iommu_group *alloc_iommu_group(unsigned int id)
> +{
> +    struct iommu_group *grp;
> +    static unsigned int index;
> +
> +    grp = xzalloc(struct iommu_group);

Can be moved with the declaration above.

> +    if ( !grp )
> +        return NULL;
> +
> +    grp->id = id;
> +    grp->index = index++;

AFAICT none of this is subject to races because it's always protected
by the pcidevs lock?

> +    INIT_LIST_HEAD(&grp->devs_list);
> +
> +    if ( radix_tree_insert(&iommu_groups, id, grp) )
> +    {
> +        xfree(grp);
> +        grp = NULL;

Do you need to decrease index here, or is it fine to burn numbers on
failure?

> +    }
> +
> +    return grp;
> +}
> +
> +static struct iommu_group *get_iommu_group(unsigned int id)
> +{
> +    struct iommu_group *grp = radix_tree_lookup(&iommu_groups, id);
> +
> +    if ( !grp )
> +        grp = alloc_iommu_group(id);
> +
> +    return grp;
> +}
> +
> +int iommu_group_assign(struct pci_dev *pdev)
> +{
> +    const struct iommu_ops *ops;
> +    unsigned int id;
> +    struct iommu_group *grp;
> +
> +    ops = iommu_get_ops();

This initialization can be done at declaration time.

> +    if ( !ops || !ops->get_device_group_id )
> +        return 0;
> +
> +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
> +    grp = get_iommu_group(id);
> +
> +    if ( ! grp )
          ^ extra space
> +        return -ENOMEM;
> +
> +    if ( iommu_verbose )
> +        printk(XENLOG_INFO "Assign %04x:%02x:%02x.%u -> IOMMU group %u\n",
> +               pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> +               PCI_FUNC(pdev->devfn), grp->index);

Wouldn't it be more helpful to print the group ID rather than the Xen
internal index?

Thanks, Roger.
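For concreteness, a minimal sketch of what Roger is suggesting might look
like the following — allocation folded into the declaration and the index
field dropped, keying solely on the ID. This is illustrative only, not the
code that was eventually committed, and it assumes (as discussed above)
that callers are serialized by the pcidevs lock:

/* Sketch only: alloc_iommu_group() with the allocation moved to the
 * declaration and the index field removed, as suggested in review.
 * Callers are assumed to hold the pcidevs lock. */
static struct iommu_group *alloc_iommu_group(unsigned int id)
{
    struct iommu_group *grp = xzalloc(struct iommu_group);

    if ( !grp )
        return NULL;

    grp->id = id;
    INIT_LIST_HEAD(&grp->devs_list);

    if ( radix_tree_insert(&iommu_groups, id, grp) )
    {
        xfree(grp);
        grp = NULL;
    }

    return grp;
}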
>>> On 08.05.19 at 15:24, <paul.durrant@citrix.com> wrote:
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -655,6 +655,82 @@ static void iommu_dump_p2m_table(unsigned char key)
>      }
>  }
>
> +#ifdef CONFIG_HAS_PCI
> +
> +struct iommu_group {
> +    unsigned int id;
> +    unsigned int index;
> +    struct list_head devs_list;
> +};

Could these additions as a whole go into a new groups.c?

> +int iommu_group_assign(struct pci_dev *pdev)
> +{
> +    const struct iommu_ops *ops;
> +    unsigned int id;
> +    struct iommu_group *grp;
> +
> +    ops = iommu_get_ops();
> +    if ( !ops || !ops->get_device_group_id )

The way iommu_get_ops() works the left side of the || is pointless.

> +        return 0;
> +
> +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
> +    grp = get_iommu_group(id);

I don't think solitary devices should be allocated a group. Also
you don't handle failure of ops->get_device_group_id().

> +    if ( ! grp )

Nit: Stray blank.

> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -75,6 +75,9 @@ struct pci_dev {
>      struct list_head alldevs_list;
>      struct list_head domain_list;
>
> +    struct list_head grpdevs_list;

Does this separate list provide much value? The devices in a group
are going to move between two domain_list-s all in one go, so
once you know the domain of one you'll be able to find the rest by
iterating that domain's list. Is the fear that such an iteration may
be tens of thousands of entries long, and hence become an issue
when traversed? I have no idea how many PCI devices the biggest
systems today would have, but if traversal was an issue, then it
would already be with the code we've got now.

Jan
>>> On 08.05.19 at 15:24, <paul.durrant@citrix.com> wrote:
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -427,6 +427,7 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
>
>      check_pdev(pdev);
>      apply_quirks(pdev);
> +    iommu_group_assign(pdev);
>
>      return pdev;
>  }
> @@ -1098,6 +1099,8 @@ int __init scan_pci_devices(void)
>  {
>      int ret;
>
> +    iommu_groups_init();
> +
>      pcidevs_lock();
>      ret = pci_segments_iterate(_scan_pci_devices, NULL);
>      pcidevs_unlock();

In free_pdev() you also want to remove a device from its group.

Jan
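A sketch of the teardown Jan is asking for might look like the following.
iommu_group_remove() is a hypothetical helper name, and whether an emptied
group should be freed (rather than kept cached) is a design choice not
settled in this thread:

/* Hypothetical helper, to be called from free_pdev(): unlink the
 * device from its group and free the group once it becomes empty.
 * Assumes the pcidevs lock is held, as elsewhere in this code. */
static void iommu_group_remove(struct pci_dev *pdev)
{
    struct iommu_group *grp = pdev->grp;

    if ( !grp )
        return;

    list_del(&pdev->grpdevs_list);
    pdev->grp = NULL;

    if ( list_empty(&grp->devs_list) )
    {
        radix_tree_delete(&iommu_groups, grp->id);
        xfree(grp);
    }
}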
> -----Original Message-----
> From: Roger Pau Monne
> Sent: 15 May 2019 09:45
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; George Dunlap
> <George.Dunlap@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Tim (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>; Jan
> Beulich <jbeulich@suse.com>
> Subject: Re: [Xen-devel] [PATCH 4/5] iommu: introduce iommu_groups
>
> On Wed, May 08, 2019 at 02:24:02PM +0100, Paul Durrant wrote:
> > Some devices may share a single PCIe initiator id, e.g. if they are actually
> > legacy PCI devices behind a bridge, and hence DMA from such devices will
> > be subject to the same address translation in the IOMMU. Hence these devices
> > should be treated as a unit for the purposes of assignment. There are also
> > other reasons why multiple devices should be treated as a unit, e.g. those
> > subject to a shared RMRR or those downstream of a bridge that does not
> > support ACS.
> >
> > This patch introduces a new struct iommu_group to act as a container for
> > devices that should be treated as a unit, and builds a list of them as
> > PCI devices are scanned. The iommu_ops already implement a method,
> > get_device_group_id(), that is seemingly intended to return the initiator
> > id for a given SBDF so use this as the mechanism for group assignment in
> > the first instance. Assignment based on shared RMRR or lack of ACS will be
> > dealt with in subsequent patches, as will modifications to the device
> > assignment code.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > Cc: Julien Grall <julien.grall@arm.com>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Tim Deegan <tim@xen.org>
> > Cc: Wei Liu <wei.liu2@citrix.com>
> > ---
> >  xen/drivers/passthrough/iommu.c | 76 +++++++++++++++++++++++++++++++++++++++++
> >  xen/drivers/passthrough/pci.c   |  3 ++
> >  xen/include/xen/iommu.h         |  7 ++++
> >  xen/include/xen/pci.h           |  3 ++
> >  4 files changed, 89 insertions(+)
> >
> > diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> > index d3a6199b77..11319fbaae 100644
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -655,6 +655,82 @@ static void iommu_dump_p2m_table(unsigned char key)
> >      }
> >  }
> >
> > +#ifdef CONFIG_HAS_PCI
> > +
> > +struct iommu_group {
> > +    unsigned int id;
> > +    unsigned int index;
>
> I'm not sure I see the point of the index field, isn't it enough to
> just use the ID field?

The index is just supposed to be an index to refer to the group. Linux
has something similar, and this is what ends up in sysfs, but I guess
there's not much point having this in Xen as yet... It can always be
added later if it proves desirable.

> The ID is already used as a unique key in the code below for the radix
> tree operations.

Yes, that's the meaningful number.

> > +    struct list_head devs_list;
> > +};
> > +
> > +static struct radix_tree_root iommu_groups;
> > +
> > +void __init iommu_groups_init(void)
> > +{
> > +    radix_tree_init(&iommu_groups);
> > +}
> > +
> > +static struct iommu_group *alloc_iommu_group(unsigned int id)
> > +{
> > +    struct iommu_group *grp;
> > +    static unsigned int index;
> > +
> > +    grp = xzalloc(struct iommu_group);
>
> Can be moved with the declaration above.

Sure.

> > +    if ( !grp )
> > +        return NULL;
> > +
> > +    grp->id = id;
> > +    grp->index = index++;
>
> AFAICT none of this is subject to races because it's always protected
> by the pcidevs lock?

Yes, it's under lock.

> > +    INIT_LIST_HEAD(&grp->devs_list);
> > +
> > +    if ( radix_tree_insert(&iommu_groups, id, grp) )
> > +    {
> > +        xfree(grp);
> > +        grp = NULL;
>
> Do you need to decrease index here, or is it fine to burn numbers on
> failure?

I'll get rid of the index.

> > +    }
> > +
> > +    return grp;
> > +}
> > +
> > +static struct iommu_group *get_iommu_group(unsigned int id)
> > +{
> > +    struct iommu_group *grp = radix_tree_lookup(&iommu_groups, id);
> > +
> > +    if ( !grp )
> > +        grp = alloc_iommu_group(id);
> > +
> > +    return grp;
> > +}
> > +
> > +int iommu_group_assign(struct pci_dev *pdev)
> > +{
> > +    const struct iommu_ops *ops;
> > +    unsigned int id;
> > +    struct iommu_group *grp;
> > +
> > +    ops = iommu_get_ops();
>
> This initialization can be done at declaration time.

Sure.

> > +    if ( !ops || !ops->get_device_group_id )
> > +        return 0;
> > +
> > +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
> > +    grp = get_iommu_group(id);
> > +
> > +    if ( ! grp )
>           ^ extra space
> > +        return -ENOMEM;
> > +
> > +    if ( iommu_verbose )
> > +        printk(XENLOG_INFO "Assign %04x:%02x:%02x.%u -> IOMMU group %u\n",
> > +               pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> > +               PCI_FUNC(pdev->devfn), grp->index);
>
> Wouldn't it be more helpful to print the group ID rather than the Xen
> internal index?

Yes, once the index is gone I'll do that instead.

  Paul

> Thanks, Roger.
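Folding the changes Paul agrees to here together, a revised
iommu_group_assign() might plausibly look like this — ops initialized at
declaration, the stray space removed, the redundant !ops test gone (per
Jan's observation elsewhere in the thread), and the group ID logged
instead of the dropped index. A sketch, not the actual v2 patch:

int iommu_group_assign(struct pci_dev *pdev)
{
    const struct iommu_ops *ops = iommu_get_ops();
    struct iommu_group *grp;

    if ( !ops->get_device_group_id )
        return 0;

    grp = get_iommu_group(ops->get_device_group_id(pdev->seg, pdev->bus,
                                                   pdev->devfn));
    if ( !grp )
        return -ENOMEM;

    if ( iommu_verbose )
        printk(XENLOG_INFO "Assign %04x:%02x:%02x.%u -> IOMMU group %u\n",
               pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
               PCI_FUNC(pdev->devfn), grp->id);

    /* Record group membership on the device itself. */
    list_add(&pdev->grpdevs_list, &grp->devs_list);
    pdev->grp = grp;

    return 0;
}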
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 15 May 2019 15:18
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>;
> Stefano Stabellini <sstabellini@kernel.org>; xen-devel <xen-devel@lists.xenproject.org>; Konrad
> Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> Subject: Re: [PATCH 4/5] iommu: introduce iommu_groups
>
> >>> On 08.05.19 at 15:24, <paul.durrant@citrix.com> wrote:
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -655,6 +655,82 @@ static void iommu_dump_p2m_table(unsigned char key)
> >      }
> >  }
> >
> > +#ifdef CONFIG_HAS_PCI
> > +
> > +struct iommu_group {
> > +    unsigned int id;
> > +    unsigned int index;
> > +    struct list_head devs_list;
> > +};
>
> Could these additions as a whole go into a new groups.c?

Sure.

> > +int iommu_group_assign(struct pci_dev *pdev)
> > +{
> > +    const struct iommu_ops *ops;
> > +    unsigned int id;
> > +    struct iommu_group *grp;
> > +
> > +    ops = iommu_get_ops();
> > +    if ( !ops || !ops->get_device_group_id )
>
> The way iommu_get_ops() works the left side of the || is pointless.

Yes, this was a hangover from a previous variant of patch #3, which I'm
going to drop anyway.

> > +        return 0;
> > +
> > +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
> > +    grp = get_iommu_group(id);
>
> I don't think solitary devices should be allocated a group. Also
> you don't handle failure of ops->get_device_group_id().

True, it can fail in the VT-d case. Not clear what to do in that case
though; I guess assume - for now - that the device gets its own group.

I think all devices should get a group. The group will ultimately be
the unit of assignment to a VM and, in the best case, we *expect* each
device to have its own group... it's only when there are quirks, legacy
bridges etc. that multiple devices should end up in the same group.
This is consistent with Linux's IOMMU groups.

> > +    if ( ! grp )
>
> Nit: Stray blank.

Oh yes.

> > --- a/xen/include/xen/pci.h
> > +++ b/xen/include/xen/pci.h
> > @@ -75,6 +75,9 @@ struct pci_dev {
> >      struct list_head alldevs_list;
> >      struct list_head domain_list;
> >
> > +    struct list_head grpdevs_list;
>
> Does this separate list provide much value? The devices in a group
> are going to move between two domain_list-s all in one go, so
> once you know the domain of one you'll be able to find the rest by
> iterating that domain's list. Is the fear that such an iteration may
> be tens of thousands of entries long, and hence become an issue
> when traversed? I have no idea how many PCI devices the biggest
> systems today would have, but if traversal was an issue, then it
> would already be with the code we've got now.

I'd prefer to keep it... It makes the re-implementation of the domctl
in the next patch more straightforward.

  Paul

> Jan
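As a rough illustration of the "own group on failure" fallback Paul
describes (the thread states get_device_group_id() can fail in the VT-d
case), something along these lines could work. iommu_group_id_for() is a
made-up helper, and folding pdev->seg into the key for full uniqueness is
glossed over:

/* Made-up helper illustrating the fallback Paul suggests: if the
 * IOMMU implementation cannot provide a group ID, key a fresh group
 * on the device's own BDF so the device still lands in a (solitary)
 * group of its own. */
static unsigned int iommu_group_id_for(const struct pci_dev *pdev,
                                       const struct iommu_ops *ops)
{
    int id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);

    return id < 0 ? PCI_BDF2(pdev->bus, pdev->devfn) : id;
}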
>>> On 31.05.19 at 15:55, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 15 May 2019 15:18
>>
>> >>> On 08.05.19 at 15:24, <paul.durrant@citrix.com> wrote:
>> > +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
>> > +    grp = get_iommu_group(id);
>>
>> I don't think solitary devices should be allocated a group. Also
>> you don't handle failure of ops->get_device_group_id().
>
> True, it can fail in the VT-d case. Not clear what to do in that case though;
> I guess assume - for now - that the device gets its own group.
> I think all devices should get a group. The group will ultimately be the
> unit of assignment to a VM and, in the best case, we *expect* each device to
> have its own group... it's only when there are quirks, legacy bridges etc.
> that multiple devices should end up in the same group. This is consistent
> with Linux's IOMMU groups.

Well, I'm not worried much about consistency with Linux here, as you're
not cloning their implementation anyway (afaict). To me at this point
wrapping individual devices in groups looks like just extra overhead
with no real gain. But, granted, the gain may appear later.

>> > --- a/xen/include/xen/pci.h
>> > +++ b/xen/include/xen/pci.h
>> > @@ -75,6 +75,9 @@ struct pci_dev {
>> >      struct list_head alldevs_list;
>> >      struct list_head domain_list;
>> >
>> > +    struct list_head grpdevs_list;
>>
>> Does this separate list provide much value? The devices in a group
>> are going to move between two domain_list-s all in one go, so
>> once you know the domain of one you'll be able to find the rest by
>> iterating that domain's list. Is the fear that such an iteration may
>> be tens of thousands of entries long, and hence become an issue
>> when traversed? I have no idea how many PCI devices the biggest
>> systems today would have, but if traversal was an issue, then it
>> would already be with the code we've got now.
>
> I'd prefer to keep it... It makes the re-implementation of the domctl in the
> next patch more straightforward.

I can accept this as the positive side. But there's extra storage
needed (not much, but anyway), and the more (independent) lists we have
that devices can be on, the more likely it'll be that one of them gets
screwed up at some point (e.g. by forgetting to remove a device from
one of them prior to de-allocation).

Jan
> -----Original Message-----
[snip]
> >> > --- a/xen/include/xen/pci.h
> >> > +++ b/xen/include/xen/pci.h
> >> > @@ -75,6 +75,9 @@ struct pci_dev {
> >> >      struct list_head alldevs_list;
> >> >      struct list_head domain_list;
> >> >
> >> > +    struct list_head grpdevs_list;
> >>
> >> Does this separate list provide much value? The devices in a group
> >> are going to move between two domain_list-s all in one go, so
> >> once you know the domain of one you'll be able to find the rest by
> >> iterating that domain's list. Is the fear that such an iteration may
> >> be tens of thousands of entries long, and hence become an issue
> >> when traversed? I have no idea how many PCI devices the biggest
> >> systems today would have, but if traversal was an issue, then it
> >> would already be with the code we've got now.
> >
> > I'd prefer to keep it... It makes the re-implementation of the domctl in the
> > next patch more straightforward.
>
> I can accept this as the positive side. But there's extra storage
> needed (not much, but anyway), and the more (independent) lists we have
> that devices can be on, the more likely it'll be that one of them gets
> screwed up at some point (e.g. by forgetting to remove a device from
> one of them prior to de-allocation).

Ok, I'll drop the list and just match on the grp pointer.

  Paul

> Jan
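The approach agreed here — no per-group device list, just matching on the
grp pointer while walking an existing list — might be sketched as below.
group_for_each_dev() is a hypothetical helper, and the choice of which
list to walk (the owning domain's pdev_list here, per Jan's suggestion)
is an assumption:

/* Hypothetical sketch: find co-grouped devices without a dedicated
 * grpdevs_list by matching on pdev->grp while walking the owning
 * domain's device list. Assumes the pcidevs lock is held. */
static void group_for_each_dev(const struct pci_dev *pdev,
                               void (*fn)(struct pci_dev *))
{
    struct pci_dev *other;

    list_for_each_entry ( other, &pdev->domain->pdev_list, domain_list )
        if ( other->grp == pdev->grp )
            fn(other);
}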
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index d3a6199b77..11319fbaae 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -655,6 +655,82 @@ static void iommu_dump_p2m_table(unsigned char key)
     }
 }
 
+#ifdef CONFIG_HAS_PCI
+
+struct iommu_group {
+    unsigned int id;
+    unsigned int index;
+    struct list_head devs_list;
+};
+
+static struct radix_tree_root iommu_groups;
+
+void __init iommu_groups_init(void)
+{
+    radix_tree_init(&iommu_groups);
+}
+
+static struct iommu_group *alloc_iommu_group(unsigned int id)
+{
+    struct iommu_group *grp;
+    static unsigned int index;
+
+    grp = xzalloc(struct iommu_group);
+    if ( !grp )
+        return NULL;
+
+    grp->id = id;
+    grp->index = index++;
+    INIT_LIST_HEAD(&grp->devs_list);
+
+    if ( radix_tree_insert(&iommu_groups, id, grp) )
+    {
+        xfree(grp);
+        grp = NULL;
+    }
+
+    return grp;
+}
+
+static struct iommu_group *get_iommu_group(unsigned int id)
+{
+    struct iommu_group *grp = radix_tree_lookup(&iommu_groups, id);
+
+    if ( !grp )
+        grp = alloc_iommu_group(id);
+
+    return grp;
+}
+
+int iommu_group_assign(struct pci_dev *pdev)
+{
+    const struct iommu_ops *ops;
+    unsigned int id;
+    struct iommu_group *grp;
+
+    ops = iommu_get_ops();
+    if ( !ops || !ops->get_device_group_id )
+        return 0;
+
+    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
+    grp = get_iommu_group(id);
+
+    if ( ! grp )
+        return -ENOMEM;
+
+    if ( iommu_verbose )
+        printk(XENLOG_INFO "Assign %04x:%02x:%02x.%u -> IOMMU group %u\n",
+               pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
+               PCI_FUNC(pdev->devfn), grp->index);
+
+    list_add(&pdev->grpdevs_list, &grp->devs_list);
+    pdev->grp = grp;
+
+    return 0;
+}
+
+#endif /* CONFIG_HAS_PCI */
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 8108ed5f9a..6210409741 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -427,6 +427,7 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
 
     check_pdev(pdev);
     apply_quirks(pdev);
+    iommu_group_assign(pdev);
 
     return pdev;
 }
@@ -1098,6 +1099,8 @@ int __init scan_pci_devices(void)
 {
     int ret;
 
+    iommu_groups_init();
+
     pcidevs_lock();
     ret = pci_segments_iterate(_scan_pci_devices, NULL);
     pcidevs_unlock();
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index b2d429a6ef..6d6aa12314 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -315,6 +315,13 @@ DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 extern struct spinlock iommu_pt_cleanup_lock;
 extern struct page_list_head iommu_pt_cleanup_list;
 
+#ifdef CONFIG_HAS_PCI
+
+void iommu_groups_init(void);
+int iommu_group_assign(struct pci_dev *pdev);
+
+#endif /* CONFIG_HAS_PCI */
+
 #endif /* _IOMMU_H_ */
 
 /*
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 8b21e8dc84..5fe7525db6 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -75,6 +75,9 @@ struct pci_dev {
     struct list_head alldevs_list;
     struct list_head domain_list;
 
+    struct list_head grpdevs_list;
+    struct iommu_group *grp;
+
     struct list_head msi_list;
     struct arch_msix *msix;
Some devices may share a single PCIe initiator id, e.g. if they are actually
legacy PCI devices behind a bridge, and hence DMA from such devices will
be subject to the same address translation in the IOMMU. Hence these devices
should be treated as a unit for the purposes of assignment. There are also
other reasons why multiple devices should be treated as a unit, e.g. those
subject to a shared RMRR or those downstream of a bridge that does not
support ACS.

This patch introduces a new struct iommu_group to act as a container for
devices that should be treated as a unit, and builds a list of them as
PCI devices are scanned. The iommu_ops already implement a method,
get_device_group_id(), that is seemingly intended to return the initiator
id for a given SBDF so use this as the mechanism for group assignment in
the first instance. Assignment based on shared RMRR or lack of ACS will be
dealt with in subsequent patches, as will modifications to the device
assignment code.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 xen/drivers/passthrough/iommu.c | 76 +++++++++++++++++++++++++++++++++++++++++
 xen/drivers/passthrough/pci.c   |  3 ++
 xen/include/xen/iommu.h         |  7 ++++
 xen/include/xen/pci.h           |  3 ++
 4 files changed, 89 insertions(+)