
[v3,14/15] iommufd: vfio container FD ioctl compatibility

Message ID 14-v3-402a7d6459de+24b-iommufd_jgg@nvidia.com (mailing list archive)
State Not Applicable
Delegated to: Netdev Maintainers
Series IOMMUFD Generic interface

Commit Message

Jason Gunthorpe Oct. 25, 2022, 6:12 p.m. UTC
iommufd can directly implement the /dev/vfio/vfio container IOCTLs by
mapping them into io_pagetable operations.

A userspace application can test against iommufd and confirm compatibility,
then simply make a small change to open /dev/iommu instead of
/dev/vfio/vfio.
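
A minimal sketch of that change (hypothetical probe code, not part of
this series; it relies only on ioctls this patch implements):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
	/* Previously: open("/dev/vfio/vfio", O_RDWR) */
	int fd = open("/dev/iommu", O_RDWR);

	if (fd < 0)
		return 1;
	/* The compat layer answers the classic container ioctls */
	if (ioctl(fd, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
		return 1;
	if (!ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1v2_IOMMU))
		return 1;
	printf("iommufd provides VFIO type1v2 compatibility\n");
	return 0;
}

As in VFIO, VFIO_SET_IOMMU will still fail here until a group is
attached, since no compat ioas exists yet.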

For testing purposes /dev/vfio/vfio can be symlinked to /dev/iommu and
then all applications will use the compatibility path with no code
changes. A later series allows /dev/vfio/vfio to be directly provided by
iommufd, which allows the rlimit mode to work the same as well.

This series just provides the iommufd side of compatibility. Actually
linking this to VFIO_SET_CONTAINER is a followup series, with a link in
the cover letter.

Internally the compatibility API uses a normal IOAS object that, like
vfio, is automatically allocated when the first device is
attached.

Userspace can also query or set this IOAS object directly using the
IOMMU_VFIO_IOAS ioctl. This allows mixing and matching new iommufd-only
features while still using the VFIO style map/unmap ioctls.
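
A sketch of that mix and match, assuming the IOMMU_IOAS_ALLOC ioctl and
struct layouts from earlier in this series (the helper name is made up):

#include <sys/ioctl.h>
#include <linux/iommufd.h>
#include <linux/vfio.h>

/* Create a fresh IOAS with the native API, point the compat layer at
 * it, then populate it with the legacy VFIO map ioctl. Returns the
 * IOAS ID or -1 on error.
 */
static int vfio_map_on_new_ioas(int fd, void *buf, unsigned long len,
				unsigned long iova)
{
	struct iommu_ioas_alloc alloc = { .size = sizeof(alloc) };
	struct iommu_vfio_ioas vfio_ioas = {
		.size = sizeof(vfio_ioas),
		.op = IOMMU_VFIO_IOAS_SET,
	};
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (unsigned long)buf,
		.iova = iova,
		.size = len,
	};

	if (ioctl(fd, IOMMU_IOAS_ALLOC, &alloc))	/* native iommufd */
		return -1;
	vfio_ioas.ioas_id = alloc.out_ioas_id;
	if (ioctl(fd, IOMMU_VFIO_IOAS, &vfio_ioas))	/* this patch */
		return -1;
	if (ioctl(fd, VFIO_IOMMU_MAP_DMA, &map))	/* legacy VFIO */
		return -1;
	return alloc.out_ioas_id;
}

Note the compat map path only needs a compat ioas to exist; it does not
require a prior VFIO_SET_IOMMU.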

While this is enough to operate qemu, it has a few differences:

 - Resource limits rely on memory cgroups to bound what userspace can do
   instead of the module parameter dma_entry_limit.

 - VFIO P2P is not implemented. The DMABUF patches for vfio are a start at
   a solution where iommufd would import a special DMABUF. This is to avoid
   further propagating the follow_pfn() security problem.

 - A full audit for pedantic compatibility details (eg errnos, etc) has
   not yet been done

 - powerpc SPAPR is left out, as it is not connected to the iommu_domain
   framework. It seems interest in SPAPR is minimal as it is currently
   non-working in v6.1-rc1. They will have to convert to the iommu
   subsystem framework to enjoy iommufd.

The following are not going to be implemented and we expect to remove them
from VFIO type1:

 - SW access 'dirty tracking'. As discussed in the cover letter this will
   be done in VFIO.

 - VFIO_TYPE1_NESTING_IOMMU
    https://lore.kernel.org/all/0-v1-0093c9b0e345+19-vfio_no_nesting_jgg@nvidia.com/

 - VFIO_DMA_MAP_FLAG_VADDR
    https://lore.kernel.org/all/Yz777bJZjTyLrHEQ@nvidia.com/

Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommufd/Makefile          |   3 +-
 drivers/iommu/iommufd/iommufd_private.h |   6 +
 drivers/iommu/iommufd/main.c            |  16 +-
 drivers/iommu/iommufd/vfio_compat.c     | 443 ++++++++++++++++++++++++
 include/linux/iommufd.h                 |   7 +
 include/uapi/linux/iommufd.h            |  36 ++
 6 files changed, 505 insertions(+), 6 deletions(-)
 create mode 100644 drivers/iommu/iommufd/vfio_compat.c

Comments

Jason Gunthorpe Oct. 27, 2022, 2:12 p.m. UTC | #1
> +	down_read(&ioas->iopt.iova_rwsem);
> +	info.flags = VFIO_IOMMU_INFO_PGSIZES;
> +	info.iova_pgsizes = iommufd_get_pagesizes(ioas);
> +	info.cap_offset = 0;

iommufd_get_pagesizes() obtains the domains_rwsem and cannot be
called under the iova_rwsem due to lock ordering.

The test suite already covers this, but it turns out my test
environment had lockdep disabled since it hits a Intel iommu lockdep
splat on boot starting in v6.1-rc1 :\ Syzkaller found it because it
runs the VM with different options and avoids the boot splat.

@@ -371,11 +371,11 @@ static int iommufd_vfio_iommu_get_info(struct iommufd_ctx *ictx,
        if (IS_ERR(ioas))
                return PTR_ERR(ioas);
 
-       down_read(&ioas->iopt.iova_rwsem);
        info.flags = VFIO_IOMMU_INFO_PGSIZES;
        info.iova_pgsizes = iommufd_get_pagesizes(ioas);
        info.cap_offset = 0;
 
+       down_read(&ioas->iopt.iova_rwsem);
        total_cap_size = sizeof(info);
        for (i = 0; i != ARRAY_SIZE(fill_fns); i++) {
                int cap_size;

Jason
Nicolin Chen Nov. 1, 2022, 7:45 p.m. UTC | #2
On Tue, Oct 25, 2022 at 03:12:23PM -0300, Jason Gunthorpe wrote:

> +static int iommufd_vfio_iommu_get_info(struct iommufd_ctx *ictx,
> +				       void __user *arg)

> +	if (copy_to_user(arg, &info, minsz))
> +		rc = -EFAULT;
> +	rc = 0;

Coverity reports a value overwriting here:
rc is set to -EFAULT first, then overwritten to 0.

> +
> +out_put:
> +	up_read(&ioas->iopt.iova_rwsem);
> +	iommufd_put_object(&ioas->obj);
> +	return rc;
> +}
Jason Gunthorpe Nov. 2, 2022, 1:15 p.m. UTC | #3
On Tue, Nov 01, 2022 at 12:45:01PM -0700, Nicolin Chen wrote:
> On Tue, Oct 25, 2022 at 03:12:23PM -0300, Jason Gunthorpe wrote:
> 
> > +static int iommufd_vfio_iommu_get_info(struct iommufd_ctx *ictx,
> > +				       void __user *arg)
> 
> > +	if (copy_to_user(arg, &info, minsz))
> > +		rc = -EFAULT;
> > +	rc = 0;
> 
> Coverity reports a value overwriting here:
> rc gets -EFAULT first then gets overwritten to 0.

Indeed, it should be

        info.cap_offset = sizeof(info);
        info.argsz = total_cap_size;
        info.flags |= VFIO_IOMMU_INFO_CAPS;
-       if (copy_to_user(arg, &info, minsz))
+       if (copy_to_user(arg, &info, minsz)) {
                rc = -EFAULT;
+               goto out_put;
+       }
        rc = 0;

Jason
Baolu Lu Nov. 5, 2022, 12:07 a.m. UTC | #4
On 2022/10/26 2:12, Jason Gunthorpe wrote:
> +static int iommufd_fill_cap_iova(struct iommufd_ioas *ioas,
> +				 struct vfio_info_cap_header __user *cur,
> +				 size_t avail)
> +{
> +	struct vfio_iommu_type1_info_cap_iova_range __user *ucap_iovas =
> +		container_of(cur,
> +			     struct vfio_iommu_type1_info_cap_iova_range __user,
> +			     header);
> +	struct vfio_iommu_type1_info_cap_iova_range cap_iovas = {
> +		.header = {
> +			.id = VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE,
> +			.version = 1,
> +		},
> +	};
> +	struct interval_tree_span_iter span;

Intel 0day robot reported:

commit: 954c5e0297d664c7f46c628b3151567b53afe153 [14/15] iommufd: vfio container FD ioctl compatibility
config: m68k-randconfig-s053-20221104 (attached as .config)
compiler: m68k-linux-gcc (GCC) 12.1.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.4-39-gce1a6720-dirty
        git remote add internal-blu2-usb git://bee.sh.intel.com/git/blu2/usb.git
        git fetch --no-tags internal-blu2-usb iommu/iommufd/v3
        git checkout 954c5e0297d664c7f46c628b3151567b53afe153
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=m68k SHELL=/bin/bash drivers/iommu/iommufd/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

sparse warnings: (new ones prefixed by >>)
 >> drivers/iommu/iommufd/vfio_compat.c:294:17: sparse: sparse: cast removes address space '__user' of expression

vim +/__user +294 drivers/iommu/iommufd/vfio_compat.c

    288	
    289	static int iommufd_fill_cap_iova(struct iommufd_ioas *ioas,
    290					 struct vfio_info_cap_header __user *cur,
    291					 size_t avail)
    292	{
    293		struct vfio_iommu_type1_info_cap_iova_range __user *ucap_iovas =
  > 294			container_of(cur,
    295				     struct vfio_iommu_type1_info_cap_iova_range __user,
    296				     header);
    297		struct vfio_iommu_type1_info_cap_iova_range cap_iovas = {
    298			.header = {
    299				.id = VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE,
    300				.version = 1,
    301			},
    302		};
    303		struct interval_tree_span_iter span;
    304	
    305		interval_tree_for_each_span(&span, &ioas->iopt.reserved_itree, 0,
    306					    ULONG_MAX) {
    307			struct vfio_iova_range range;
    308	
    309			if (!span.is_hole)
    310				continue;
    311			range.start = span.start_hole;
    312			range.end = span.last_hole;
    313			if (avail >= struct_size(&cap_iovas, iova_ranges,
    314						 cap_iovas.nr_iovas + 1) &&
    315			    copy_to_user(&ucap_iovas->iova_ranges[cap_iovas.nr_iovas],
    316					 &range, sizeof(range)))
    317				return -EFAULT;
    318			cap_iovas.nr_iovas++;
    319		}
    320		if (avail >= struct_size(&cap_iovas, iova_ranges, cap_iovas.nr_iovas) &&
    321		    copy_to_user(ucap_iovas, &cap_iovas, sizeof(cap_iovas)))
    322			return -EFAULT;
    323		return struct_size(&cap_iovas, iova_ranges, cap_iovas.nr_iovas);
    324	}
    325	

Best regards,
baolu
Tian, Kevin Nov. 5, 2022, 9:31 a.m. UTC | #5
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, October 26, 2022 2:12 AM
>
> +int iommufd_vfio_compat_ioas_id(struct iommufd_ctx *ictx, u32
> *out_ioas_id)
> +{
> +	struct iommufd_ioas *ioas = NULL;
> +	struct iommufd_ioas *out_ioas;
> +
> +	ioas = iommufd_ioas_alloc(ictx);
> +	if (IS_ERR(ioas))
> +		return PTR_ERR(ioas);

I tried to find out where the auto-created compat_ioas is destroyed.

Is my understanding correct that nobody holds a long-term users
count on it, so we expect it to be destroyed at iommufd release?

If yes, probably worth adding a comment to explain this behavior.

> +
> +	case IOMMU_VFIO_IOAS_SET:
> +		ioas = iommufd_get_ioas(ucmd, cmd->ioas_id);
> +		if (IS_ERR(ioas))
> +			return PTR_ERR(ioas);
> +		xa_lock(&ucmd->ictx->objects);
> +		ucmd->ictx->vfio_ioas = ioas;
> +		xa_unlock(&ucmd->ictx->objects);
> +		iommufd_put_object(&ioas->obj);
> +		return 0;

disallow changing vfio_ioas when it's already in-use e.g. has
a list of hwpt attached?
Jason Gunthorpe Nov. 7, 2022, 2:23 p.m. UTC | #6
On Sat, Nov 05, 2022 at 08:07:24AM +0800, Baolu Lu wrote:

> If you fix the issue, kindly add following tag where applicable
> | Reported-by: kernel test robot <lkp@intel.com>
> 
> sparse warnings: (new ones prefixed by >>)
> >> drivers/iommu/iommufd/vfio_compat.c:294:17: sparse: sparse: cast removes address space '__user' of expression
> 
> vim +/__user +294 drivers/iommu/iommufd/vfio_compat.c
> 
>    288	
>    289	static int iommufd_fill_cap_iova(struct iommufd_ioas *ioas,
>    290					 struct vfio_info_cap_header __user *cur,
>    291					 size_t avail)
>    292	{
>    293		struct vfio_iommu_type1_info_cap_iova_range __user *ucap_iovas =
>  > 294			container_of(cur,
>    295				     struct vfio_iommu_type1_info_cap_iova_range __user,
>    296				     header);

I think this is correct as-written. Sparse is flagging the casting
inside container_of. I don't think it is worth trying to fix.

Jason
Jason Gunthorpe Nov. 7, 2022, 5:08 p.m. UTC | #7
On Sat, Nov 05, 2022 at 09:31:39AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Wednesday, October 26, 2022 2:12 AM
> >
> > +int iommufd_vfio_compat_ioas_id(struct iommufd_ctx *ictx, u32
> > *out_ioas_id)
> > +{
> > +	struct iommufd_ioas *ioas = NULL;
> > +	struct iommufd_ioas *out_ioas;
> > +
> > +	ioas = iommufd_ioas_alloc(ictx);
> > +	if (IS_ERR(ioas))
> > +		return PTR_ERR(ioas);
> 
> I tried to find out where the auto-created compat_ioas is destroyed.
> 
> Is my understanding correct that nobody holds a long-term users
> count on it, so we expect it to be destroyed at iommufd release?

This is creating a userspace owned ID, like every other IOAS.

Userspace can obtain the ID using IOMMU_VFIO_IOAS GET and destroy it,
if it wants. We keep track in a later hunk:

+	if (ictx->vfio_ioas && &ictx->vfio_ioas->obj == obj)
+		ictx->vfio_ioas = NULL;

As with all userspace owned IDs they are always freed during iommufd
release.
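
Something like this from userspace, as a sketch (IOMMU_DESTROY and its
struct come from earlier in this series; error handling trimmed):

#include <sys/ioctl.h>
#include <linux/iommufd.h>

/* Learn the compat IOAS ID, then free it explicitly instead of
 * waiting for iommufd release.
 */
static int destroy_compat_ioas(int fd)
{
	struct iommu_vfio_ioas get = {
		.size = sizeof(get),
		.op = IOMMU_VFIO_IOAS_GET,
	};
	struct iommu_destroy destroy = { .size = sizeof(destroy) };

	if (ioctl(fd, IOMMU_VFIO_IOAS, &get))
		return -1;
	destroy.id = get.ioas_id;
	return ioctl(fd, IOMMU_DESTROY, &destroy);
}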

So, a comment is:

	/*
	 * An automatically created compat IOAS is treated as a userspace
	 * created object. Userspace can learn the ID via IOMMU_VFIO_IOAS_GET,
	 * and if not manually destroyed it will be destroyed automatically
	 * at iommufd release.
	 */

> > +	case IOMMU_VFIO_IOAS_SET:
> > +		ioas = iommufd_get_ioas(ucmd, cmd->ioas_id);
> > +		if (IS_ERR(ioas))
> > +			return PTR_ERR(ioas);
> > +		xa_lock(&ucmd->ictx->objects);
> > +		ucmd->ictx->vfio_ioas = ioas;
> > +		xa_unlock(&ucmd->ictx->objects);
> > +		iommufd_put_object(&ioas->obj);
> > +		return 0;
> 
> disallow changing vfio_ioas when it's already in-use e.g. has
> a list of hwpt attached?

I don't see a reason to do so..

The semantic we have is the IOAS, whatever it is, is fixed once the
device or access object is created. In the VFIO sense that means it
becomes locked to the IOAS that was set as compat when the vfio device
is bound.

Other than that, userspace can change the IOAS it wants freely, there
is no harm to the kernel and it may even be useful.

Jason
Tian, Kevin Nov. 7, 2022, 11:53 p.m. UTC | #8
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Tuesday, November 8, 2022 1:08 AM
> 
> On Sat, Nov 05, 2022 at 09:31:39AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Wednesday, October 26, 2022 2:12 AM
> > >
> > > +int iommufd_vfio_compat_ioas_id(struct iommufd_ctx *ictx, u32
> > > *out_ioas_id)
> > > +{
> > > +	struct iommufd_ioas *ioas = NULL;
> > > +	struct iommufd_ioas *out_ioas;
> > > +
> > > +	ioas = iommufd_ioas_alloc(ictx);
> > > +	if (IS_ERR(ioas))
> > > +		return PTR_ERR(ioas);
> >
> > I tried to find out where the auto-created compat_ioas is destroyed.
> >
> > Is my understanding correct that nobody holds a long-term users
> > count on it, so we expect it to be destroyed at iommufd release?
> 
> This is creating a userspace owned ID, like every other IOAS.
> 
> Userspace can obtain the ID using IOMMU_VFIO_IOAS GET and destroy it,
> if it wants. We keep track in a later hunk:
> 
> +	if (ictx->vfio_ioas && &ictx->vfio_ioas->obj == obj)
> +		ictx->vfio_ioas = NULL;
> 
> As with all userspace owned IDs they are always freed during iommufd
> release.
> 
> So, a comment is:
> 
> 	/*
> 	 * An automatically created compat IOAS is treated as a userspace
> 	 * created object. Userspace can learn the ID via
> IOMMU_VFIO_IOAS_GET,
> 	 * and if not manually destroyed it will be destroyed automatically
> 	 * at iommufd release.
> 	 */

this is clear

> 
> > > +	case IOMMU_VFIO_IOAS_SET:
> > > +		ioas = iommufd_get_ioas(ucmd, cmd->ioas_id);
> > > +		if (IS_ERR(ioas))
> > > +			return PTR_ERR(ioas);
> > > +		xa_lock(&ucmd->ictx->objects);
> > > +		ucmd->ictx->vfio_ioas = ioas;
> > > +		xa_unlock(&ucmd->ictx->objects);
> > > +		iommufd_put_object(&ioas->obj);
> > > +		return 0;
> >
> > disallow changing vfio_ioas when it's already in-use e.g. has
> > a list of hwpt attached?
> 
> I don't see a reason to do so..
> 
> The semantic we have is the IOAS, whatever it is, is fixed once the
> device or access object is created. In the VFIO sense that means it
> becomes locked to the IOAS that was set as compat when the vfio device
> is bound.
> 
> Other than that, userspace can change the IOAS it wants freely, there
> is no harm to the kernel and it may even be useful.
> 

it allows devices SET_CONTAINER to the same iommufd attached to different
IOAS's if IOAS_SET comes in the middle. Is it desired?

While it's flexible, I don't see a real usage that would use it. Instead
it causes conceptual confusion with the original vfio semantics.
Jason Gunthorpe Nov. 8, 2022, 12:09 a.m. UTC | #9
On Mon, Nov 07, 2022 at 11:53:24PM +0000, Tian, Kevin wrote:
> > Other than that, userspace can change the IOAS it wants freely, there
> > is no harm to the kernel and it may even be useful.
> 
> it allows devices SET_CONTAINER to the same iommufd attached to different
> IOAS's if IOAS_SET comes in the middle. Is it desired?

Sure, if userspace does crazy things then userspace gets to keep all
the pieces - it doesn't harm the kernel.

The IOAS is bound during get_device, and that is one of the key
conceptual things we changed with iommufd.

Jason
Tian, Kevin Nov. 8, 2022, 12:13 a.m. UTC | #10
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Tuesday, November 8, 2022 8:10 AM
> 
> On Mon, Nov 07, 2022 at 11:53:24PM +0000, Tian, Kevin wrote:
> > > Other than that, userspace can change the IOAS it wants freely, there
> > > is no harm to the kernel and it may even be useful.
> >
> > it allows devices SET_CONTAINER to the same iommufd attached to
> > different IOAS's if IOAS_SET comes in the middle. Is it desired?
> 
> Sure, if userspace does crazy things then userspace gets to keep all
> the pieces - it doesn't harm the kernel.
> 
> The IOAS is bound during get_device, and that is one of the key
> conceptual things we changed with iommufd.
> 

what we changed is the timing of claiming DMA ownership (from
SET_CONTAINER to GET_DEVICE_FD). This is fine.

But adding an interface to allow the conceptual 'container' to be tied to
multiple IOAS's is kind of overengineering IMHO.

Yes, no harm to the kernel but also no benefit to the user. It's simpler
to constrain the compat layer based on what vfio legacy has today
and then position all new/fancy usages with the new cdev.
Jason Gunthorpe Nov. 8, 2022, 12:17 a.m. UTC | #11
On Tue, Nov 08, 2022 at 12:13:55AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Tuesday, November 8, 2022 8:10 AM
> > 
> > On Mon, Nov 07, 2022 at 11:53:24PM +0000, Tian, Kevin wrote:
> > > > Other than that, userspace can change the IOAS it wants freely, there
> > > > is no harm to the kernel and it may even be useful.
> > >
> > > it allows devices SET_CONTAINER to the same iommufd attached to
> > > different IOAS's if IOAS_SET comes in the middle. Is it desired?
> > 
> > Sure, if userspace does crazy things then userspace gets to keep all
> > the pieces - it doesn't harm the kernel.
> > 
> > The IOAS is bound during get_device, and that is one of the key
> > conceptual things we changed with iommufd.
> > 
> 
> what we changed is the timing of claiming DMA ownership (from
> SET_CONTAINER to GET_DEVICE_FD). This is fine.
> 
> But adding an interface to allow the conceptual 'container' to be tied to
> multiple IOAS's is kind of overengineering IMHO.

It is not engineered to do this, it just simply happens because we
don't have the concept of a 'container' at all - and the emulation of
that same concept is very simple and doesn't try to block cases that
vfio would not have permitted.

> Yes, no harm to the kernel but also no benefit to the user. It's simpler
> to constrain the compat layer based on what vfio legacy has today
> and then position all new/fancy usages with the new cdev.

That isn't simpler at all, it is a whole bunch more code to track these
things and decide when we are allowed to execute an ioctl that doesn't
even exist in the original interface in the first place. Right now we
don't even have APIs to trace when set_container/unset_container have
happened in vfio to even constrain this.

I don't see the issue, the ioctl that manipulates the compat ioas is
new, it is optional, and if userspace mis-uses it without understanding
how it conceptually works then userspace just won't do what is
expected.

Jason

Patch

diff --git a/drivers/iommu/iommufd/Makefile b/drivers/iommu/iommufd/Makefile
index ca28a135b9675f..2fdff04000b326 100644
--- a/drivers/iommu/iommufd/Makefile
+++ b/drivers/iommu/iommufd/Makefile
@@ -5,6 +5,7 @@  iommufd-y := \
 	io_pagetable.o \
 	ioas.o \
 	main.o \
-	pages.o
+	pages.o \
+	vfio_compat.o
 
 obj-$(CONFIG_IOMMUFD) += iommufd.o
diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index 5be8983b8524e2..87fd127ca5843a 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -18,6 +18,7 @@  struct iommufd_ctx {
 	struct xarray objects;
 
 	u8 account_mode;
+	struct iommufd_ioas *vfio_ioas;
 };
 
 /*
@@ -91,6 +92,9 @@  struct iommufd_ucmd {
 	void *cmd;
 };
 
+int iommufd_vfio_ioctl(struct iommufd_ctx *ictx, unsigned int cmd,
+		       unsigned long arg);
+
 /* Copy the response in ucmd->cmd back to userspace. */
 static inline int iommufd_ucmd_respond(struct iommufd_ucmd *ucmd,
 				       size_t cmd_len)
@@ -221,6 +225,8 @@  int iommufd_ioas_option(struct iommufd_ucmd *ucmd);
 int iommufd_option_rlimit_mode(struct iommu_option *cmd,
 			       struct iommufd_ctx *ictx);
 
+int iommufd_vfio_ioas(struct iommufd_ucmd *ucmd);
+
 /*
  * A HW pagetable is called an iommu_domain inside the kernel. This user object
  * allows directly creating and inspecting the domains. Domains that have kernel
diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
index 15ffda848741c9..dc19723d5832a5 100644
--- a/drivers/iommu/iommufd/main.c
+++ b/drivers/iommu/iommufd/main.c
@@ -134,6 +134,8 @@  bool iommufd_object_destroy_user(struct iommufd_ctx *ictx,
 		return false;
 	}
 	__xa_erase(&ictx->objects, obj->id);
+	if (ictx->vfio_ioas && &ictx->vfio_ioas->obj == obj)
+		ictx->vfio_ioas = NULL;
 	xa_unlock(&ictx->objects);
 	up_write(&obj->destroy_rwsem);
 
@@ -266,27 +268,31 @@  static struct iommufd_ioctl_op iommufd_ioctl_ops[] = {
 		 length),
 	IOCTL_OP(IOMMU_OPTION, iommufd_option, struct iommu_option,
 		 val64),
+	IOCTL_OP(IOMMU_VFIO_IOAS, iommufd_vfio_ioas, struct iommu_vfio_ioas,
+		 __reserved),
 };
 
 static long iommufd_fops_ioctl(struct file *filp, unsigned int cmd,
 			       unsigned long arg)
 {
+	struct iommufd_ctx *ictx = filp->private_data;
 	struct iommufd_ucmd ucmd = {};
 	struct iommufd_ioctl_op *op;
 	union ucmd_buffer buf;
 	unsigned int nr;
 	int ret;
 
-	ucmd.ictx = filp->private_data;
+	nr = _IOC_NR(cmd);
+	if (nr < IOMMUFD_CMD_BASE ||
+	    (nr - IOMMUFD_CMD_BASE) >= ARRAY_SIZE(iommufd_ioctl_ops))
+		return iommufd_vfio_ioctl(ictx, cmd, arg);
+
+	ucmd.ictx = ictx;
 	ucmd.ubuffer = (void __user *)arg;
 	ret = get_user(ucmd.user_size, (u32 __user *)ucmd.ubuffer);
 	if (ret)
 		return ret;
 
-	nr = _IOC_NR(cmd);
-	if (nr < IOMMUFD_CMD_BASE ||
-	    (nr - IOMMUFD_CMD_BASE) >= ARRAY_SIZE(iommufd_ioctl_ops))
-		return -ENOIOCTLCMD;
 	op = &iommufd_ioctl_ops[nr - IOMMUFD_CMD_BASE];
 	if (op->ioctl_num != cmd)
 		return -ENOIOCTLCMD;
diff --git a/drivers/iommu/iommufd/vfio_compat.c b/drivers/iommu/iommufd/vfio_compat.c
new file mode 100644
index 00000000000000..4566c754856aed
--- /dev/null
+++ b/drivers/iommu/iommufd/vfio_compat.c
@@ -0,0 +1,443 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES
+ */
+#include <linux/file.h>
+#include <linux/interval_tree.h>
+#include <linux/iommu.h>
+#include <linux/iommufd.h>
+#include <linux/slab.h>
+#include <linux/vfio.h>
+#include <uapi/linux/vfio.h>
+#include <uapi/linux/iommufd.h>
+
+#include "iommufd_private.h"
+
+static struct iommufd_ioas *get_compat_ioas(struct iommufd_ctx *ictx)
+{
+	struct iommufd_ioas *ioas = ERR_PTR(-ENODEV);
+
+	xa_lock(&ictx->objects);
+	if (!ictx->vfio_ioas || !iommufd_lock_obj(&ictx->vfio_ioas->obj))
+		goto out_unlock;
+	ioas = ictx->vfio_ioas;
+out_unlock:
+	xa_unlock(&ictx->objects);
+	return ioas;
+}
+
+/**
+ * iommufd_vfio_compat_ioas_id - Return the IOAS ID that vfio should use
+ * @ictx: Context to operate on
+ *
+ * The compatibility IOAS is the IOAS that the vfio compatibility ioctls operate
+ * on since they do not have an IOAS ID input in their ABI. Only attaching a
+ * group should cause a default creation of the internal ioas, this returns the
+ * existing ioas if it has already been assigned somehow.
+ */
+int iommufd_vfio_compat_ioas_id(struct iommufd_ctx *ictx, u32 *out_ioas_id)
+{
+	struct iommufd_ioas *ioas = NULL;
+	struct iommufd_ioas *out_ioas;
+
+	ioas = iommufd_ioas_alloc(ictx);
+	if (IS_ERR(ioas))
+		return PTR_ERR(ioas);
+
+	xa_lock(&ictx->objects);
+	if (ictx->vfio_ioas && iommufd_lock_obj(&ictx->vfio_ioas->obj))
+		out_ioas = ictx->vfio_ioas;
+	else {
+		out_ioas = ioas;
+		ictx->vfio_ioas = ioas;
+	}
+	xa_unlock(&ictx->objects);
+
+	*out_ioas_id = out_ioas->obj.id;
+	if (out_ioas != ioas) {
+		iommufd_put_object(&out_ioas->obj);
+		iommufd_object_abort(ictx, &ioas->obj);
+		return 0;
+	}
+	iommufd_object_finalize(ictx, &ioas->obj);
+	return 0;
+}
+EXPORT_SYMBOL_NS_GPL(iommufd_vfio_compat_ioas_id, IOMMUFD_VFIO);
+
+int iommufd_vfio_ioas(struct iommufd_ucmd *ucmd)
+{
+	struct iommu_vfio_ioas *cmd = ucmd->cmd;
+	struct iommufd_ioas *ioas;
+
+	if (cmd->__reserved)
+		return -EOPNOTSUPP;
+	switch (cmd->op) {
+	case IOMMU_VFIO_IOAS_GET:
+		ioas = get_compat_ioas(ucmd->ictx);
+		if (IS_ERR(ioas))
+			return PTR_ERR(ioas);
+		cmd->ioas_id = ioas->obj.id;
+		iommufd_put_object(&ioas->obj);
+		return iommufd_ucmd_respond(ucmd, sizeof(*cmd));
+
+	case IOMMU_VFIO_IOAS_SET:
+		ioas = iommufd_get_ioas(ucmd, cmd->ioas_id);
+		if (IS_ERR(ioas))
+			return PTR_ERR(ioas);
+		xa_lock(&ucmd->ictx->objects);
+		ucmd->ictx->vfio_ioas = ioas;
+		xa_unlock(&ucmd->ictx->objects);
+		iommufd_put_object(&ioas->obj);
+		return 0;
+
+	case IOMMU_VFIO_IOAS_CLEAR:
+		xa_lock(&ucmd->ictx->objects);
+		ucmd->ictx->vfio_ioas = NULL;
+		xa_unlock(&ucmd->ictx->objects);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int iommufd_vfio_map_dma(struct iommufd_ctx *ictx, unsigned int cmd,
+				void __user *arg)
+{
+	u32 supported_flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
+	size_t minsz = offsetofend(struct vfio_iommu_type1_dma_map, size);
+	struct vfio_iommu_type1_dma_map map;
+	int iommu_prot = IOMMU_CACHE;
+	struct iommufd_ioas *ioas;
+	unsigned long iova;
+	int rc;
+
+	if (copy_from_user(&map, arg, minsz))
+		return -EFAULT;
+
+	if (map.argsz < minsz || map.flags & ~supported_flags)
+		return -EINVAL;
+
+	if (map.flags & VFIO_DMA_MAP_FLAG_READ)
+		iommu_prot |= IOMMU_READ;
+	if (map.flags & VFIO_DMA_MAP_FLAG_WRITE)
+		iommu_prot |= IOMMU_WRITE;
+
+	ioas = get_compat_ioas(ictx);
+	if (IS_ERR(ioas))
+		return PTR_ERR(ioas);
+
+	/*
+	 * Maps created through the legacy interface always use VFIO compatible
+	 * rlimit accounting. If the user wishes to use the faster user based
+	 * rlimit accounting then they must use the new interface.
+	 */
+	iova = map.iova;
+	rc = iopt_map_user_pages(ictx, &ioas->iopt, &iova, u64_to_user_ptr(map.vaddr),
+				 map.size, iommu_prot, 0);
+	iommufd_put_object(&ioas->obj);
+	return rc;
+}
+
+static int iommufd_vfio_unmap_dma(struct iommufd_ctx *ictx, unsigned int cmd,
+				  void __user *arg)
+{
+	size_t minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
+	/*
+	 * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP is obsoleted by the new
+	 * dirty tracking direction:
+	 *  https://lore.kernel.org/kvm/20220731125503.142683-1-yishaih@nvidia.com/
+	 *  https://lore.kernel.org/kvm/20220428210933.3583-1-joao.m.martins@oracle.com/
+	 */
+	u32 supported_flags = VFIO_DMA_UNMAP_FLAG_ALL;
+	struct vfio_iommu_type1_dma_unmap unmap;
+	struct iommufd_ioas *ioas;
+	unsigned long unmapped;
+	int rc;
+
+	if (copy_from_user(&unmap, arg, minsz))
+		return -EFAULT;
+
+	if (unmap.argsz < minsz || unmap.flags & ~supported_flags)
+		return -EINVAL;
+
+	ioas = get_compat_ioas(ictx);
+	if (IS_ERR(ioas))
+		return PTR_ERR(ioas);
+
+	if (unmap.flags & VFIO_DMA_UNMAP_FLAG_ALL) {
+		if (unmap.iova != 0 || unmap.size != 0) {
+			rc = -EINVAL;
+			goto err_put;
+		}
+		rc = iopt_unmap_all(&ioas->iopt, &unmapped);
+	} else {
+		if (READ_ONCE(ioas->iopt.disable_large_pages)) {
+			unsigned long iovas[] = { unmap.iova + unmap.size - 1,
+						  unmap.iova - 1 };
+
+			rc = iopt_cut_iova(&ioas->iopt, iovas,
+					   unmap.iova ? 2 : 1);
+			if (rc)
+				goto err_put;
+		}
+		rc = iopt_unmap_iova(&ioas->iopt, unmap.iova, unmap.size,
+				     &unmapped);
+	}
+	unmap.size = unmapped;
+	if (copy_to_user(arg, &unmap, minsz))
+		rc = -EFAULT;
+
+err_put:
+	iommufd_put_object(&ioas->obj);
+	return rc;
+}
+
+static int iommufd_vfio_cc_iommu(struct iommufd_ctx *ictx)
+{
+	struct iommufd_hw_pagetable *hwpt;
+	struct iommufd_ioas *ioas;
+	int rc = 1;
+
+	ioas = get_compat_ioas(ictx);
+	if (IS_ERR(ioas))
+		return PTR_ERR(ioas);
+
+	mutex_lock(&ioas->mutex);
+	list_for_each_entry(hwpt, &ioas->hwpt_list, hwpt_item) {
+		if (!hwpt->enforce_cache_coherency) {
+			rc = 0;
+			break;
+		}
+	}
+	mutex_unlock(&ioas->mutex);
+
+	iommufd_put_object(&ioas->obj);
+	return rc;
+}
+
+static int iommufd_vfio_check_extension(struct iommufd_ctx *ictx,
+					unsigned long type)
+{
+	switch (type) {
+	case VFIO_TYPE1_IOMMU:
+	case VFIO_TYPE1v2_IOMMU:
+	case VFIO_UNMAP_ALL:
+		return 1;
+
+	case VFIO_DMA_CC_IOMMU:
+		return iommufd_vfio_cc_iommu(ictx);
+
+	/*
+	 * This is obsolete, and to be removed from VFIO. It was an incomplete
+	 * idea that got merged.
+	 * https://lore.kernel.org/kvm/0-v1-0093c9b0e345+19-vfio_no_nesting_jgg@nvidia.com/
+	 */
+	case VFIO_TYPE1_NESTING_IOMMU:
+		return 0;
+
+	/*
+	 * VFIO_DMA_MAP_FLAG_VADDR
+	 * https://lore.kernel.org/kvm/1611939252-7240-1-git-send-email-steven.sistare@oracle.com/
+	 * https://lore.kernel.org/all/Yz777bJZjTyLrHEQ@nvidia.com/
+	 *
+	 * It is hard to see how this could be implemented safely.
+	 */
+	case VFIO_UPDATE_VADDR:
+	default:
+		return 0;
+	}
+}
+
+static int iommufd_vfio_set_iommu(struct iommufd_ctx *ictx, unsigned long type)
+{
+	struct iommufd_ioas *ioas = NULL;
+	int rc = 0;
+
+	if (type != VFIO_TYPE1_IOMMU && type != VFIO_TYPE1v2_IOMMU)
+		return -EINVAL;
+
+	/* VFIO fails the set_iommu if there is no group */
+	ioas = get_compat_ioas(ictx);
+	if (IS_ERR(ioas))
+		return PTR_ERR(ioas);
+	if (type == VFIO_TYPE1_IOMMU)
+		rc = iopt_disable_large_pages(&ioas->iopt);
+	iommufd_put_object(&ioas->obj);
+	return rc;
+}
+
+static unsigned long iommufd_get_pagesizes(struct iommufd_ioas *ioas)
+{
+	struct io_pagetable *iopt = &ioas->iopt;
+	unsigned long pgsize_bitmap = ULONG_MAX;
+	struct iommu_domain *domain;
+	unsigned long index;
+
+	down_read(&iopt->domains_rwsem);
+	xa_for_each(&iopt->domains, index, domain)
+		pgsize_bitmap &= domain->pgsize_bitmap;
+
+	/* See vfio_update_pgsize_bitmap() */
+	if (pgsize_bitmap & ~PAGE_MASK) {
+		pgsize_bitmap &= PAGE_MASK;
+		pgsize_bitmap |= PAGE_SIZE;
+	}
+	pgsize_bitmap = max(pgsize_bitmap, ioas->iopt.iova_alignment);
+	up_read(&iopt->domains_rwsem);
+	return pgsize_bitmap;
+}
+
+static int iommufd_fill_cap_iova(struct iommufd_ioas *ioas,
+				 struct vfio_info_cap_header __user *cur,
+				 size_t avail)
+{
+	struct vfio_iommu_type1_info_cap_iova_range __user *ucap_iovas =
+		container_of(cur,
+			     struct vfio_iommu_type1_info_cap_iova_range __user,
+			     header);
+	struct vfio_iommu_type1_info_cap_iova_range cap_iovas = {
+		.header = {
+			.id = VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE,
+			.version = 1,
+		},
+	};
+	struct interval_tree_span_iter span;
+
+	interval_tree_for_each_span(&span, &ioas->iopt.reserved_itree, 0,
+				    ULONG_MAX) {
+		struct vfio_iova_range range;
+
+		if (!span.is_hole)
+			continue;
+		range.start = span.start_hole;
+		range.end = span.last_hole;
+		if (avail >= struct_size(&cap_iovas, iova_ranges,
+					 cap_iovas.nr_iovas + 1) &&
+		    copy_to_user(&ucap_iovas->iova_ranges[cap_iovas.nr_iovas],
+				 &range, sizeof(range)))
+			return -EFAULT;
+		cap_iovas.nr_iovas++;
+	}
+	if (avail >= struct_size(&cap_iovas, iova_ranges, cap_iovas.nr_iovas) &&
+	    copy_to_user(ucap_iovas, &cap_iovas, sizeof(cap_iovas)))
+		return -EFAULT;
+	return struct_size(&cap_iovas, iova_ranges, cap_iovas.nr_iovas);
+}
+
+static int iommufd_fill_cap_dma_avail(struct iommufd_ioas *ioas,
+				      struct vfio_info_cap_header __user *cur,
+				      size_t avail)
+{
+	struct vfio_iommu_type1_info_dma_avail cap_dma = {
+		.header = {
+			.id = VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL,
+			.version = 1,
+		},
+		/* iommufd has no limit, return the same value as VFIO. */
+		.avail = U16_MAX,
+	};
+
+	if (avail >= sizeof(cap_dma) &&
+	    copy_to_user(cur, &cap_dma, sizeof(cap_dma)))
+		return -EFAULT;
+	return sizeof(cap_dma);
+}
+
+static int iommufd_vfio_iommu_get_info(struct iommufd_ctx *ictx,
+				       void __user *arg)
+{
+	typedef int (*fill_cap_fn)(struct iommufd_ioas *ioas,
+				   struct vfio_info_cap_header __user *cur,
+				   size_t avail);
+	static const fill_cap_fn fill_fns[] = {
+		iommufd_fill_cap_iova,
+		iommufd_fill_cap_dma_avail,
+	};
+	size_t minsz = offsetofend(struct vfio_iommu_type1_info, iova_pgsizes);
+	struct vfio_info_cap_header __user *last_cap = NULL;
+	struct vfio_iommu_type1_info info;
+	struct iommufd_ioas *ioas;
+	size_t total_cap_size;
+	int rc;
+	int i;
+
+	if (copy_from_user(&info, arg, minsz))
+		return -EFAULT;
+
+	if (info.argsz < minsz)
+		return -EINVAL;
+	minsz = min_t(size_t, info.argsz, sizeof(info));
+
+	ioas = get_compat_ioas(ictx);
+	if (IS_ERR(ioas))
+		return PTR_ERR(ioas);
+
+	down_read(&ioas->iopt.iova_rwsem);
+	info.flags = VFIO_IOMMU_INFO_PGSIZES;
+	info.iova_pgsizes = iommufd_get_pagesizes(ioas);
+	info.cap_offset = 0;
+
+	total_cap_size = sizeof(info);
+	for (i = 0; i != ARRAY_SIZE(fill_fns); i++) {
+		int cap_size;
+
+		if (info.argsz > total_cap_size)
+			cap_size = fill_fns[i](ioas, arg + total_cap_size,
+					       info.argsz - total_cap_size);
+		else
+			cap_size = fill_fns[i](ioas, NULL, 0);
+		if (cap_size < 0) {
+			rc = cap_size;
+			goto out_put;
+		}
+		if (last_cap && info.argsz >= total_cap_size &&
+		    put_user(total_cap_size, &last_cap->next)) {
+			rc = -EFAULT;
+			goto out_put;
+		}
+		last_cap = arg + total_cap_size;
+		total_cap_size += cap_size;
+	}
+
+	/*
+	 * If the user did not provide enough space then only some caps are
+	 * returned and the argsz will be updated to the correct amount to get
+	 * all caps.
+	 */
+	if (info.argsz >= total_cap_size)
+		info.cap_offset = sizeof(info);
+	info.argsz = total_cap_size;
+	info.flags |= VFIO_IOMMU_INFO_CAPS;
+	if (copy_to_user(arg, &info, minsz))
+		rc = -EFAULT;
+	rc = 0;
+
+out_put:
+	up_read(&ioas->iopt.iova_rwsem);
+	iommufd_put_object(&ioas->obj);
+	return rc;
+}
+
+int iommufd_vfio_ioctl(struct iommufd_ctx *ictx, unsigned int cmd,
+		       unsigned long arg)
+{
+	void __user *uarg = (void __user *)arg;
+
+	switch (cmd) {
+	case VFIO_GET_API_VERSION:
+		return VFIO_API_VERSION;
+	case VFIO_SET_IOMMU:
+		return iommufd_vfio_set_iommu(ictx, arg);
+	case VFIO_CHECK_EXTENSION:
+		return iommufd_vfio_check_extension(ictx, arg);
+	case VFIO_IOMMU_GET_INFO:
+		return iommufd_vfio_iommu_get_info(ictx, uarg);
+	case VFIO_IOMMU_MAP_DMA:
+		return iommufd_vfio_map_dma(ictx, cmd, uarg);
+	case VFIO_IOMMU_UNMAP_DMA:
+		return iommufd_vfio_unmap_dma(ictx, cmd, uarg);
+	case VFIO_IOMMU_DIRTY_PAGES:
+	default:
+		return -ENOIOCTLCMD;
+	}
+	return -ENOIOCTLCMD;
+}
diff --git a/include/linux/iommufd.h b/include/linux/iommufd.h
index 0750df5a7def3e..3598292df937d5 100644
--- a/include/linux/iommufd.h
+++ b/include/linux/iommufd.h
@@ -57,6 +57,7 @@  void iommufd_access_unpin_pages(struct iommufd_access *access,
 				unsigned long iova, unsigned long length);
 int iommufd_access_rw(struct iommufd_access *access, unsigned long iova,
 		      void *data, size_t len, unsigned int flags);
+int iommufd_vfio_compat_ioas_id(struct iommufd_ctx *ictx, u32 *out_ioas_id);
 #else /* !CONFIG_IOMMUFD */
 static inline struct iommufd_ctx *iommufd_ctx_from_file(struct file *file)
 {
@@ -87,5 +88,11 @@  static inline int iommufd_access_rw(struct iommufd_access *access, unsigned long
 {
 	return -EOPNOTSUPP;
 }
+
+static inline int iommufd_vfio_compat_ioas_id(struct iommufd_ctx *ictx,
+					      u32 *out_ioas_id)
+{
+	return -EOPNOTSUPP;
+}
 #endif /* CONFIG_IOMMUFD */
 #endif
diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
index 06608cecbe19e6..fbfc8799881a6d 100644
--- a/include/uapi/linux/iommufd.h
+++ b/include/uapi/linux/iommufd.h
@@ -44,6 +44,7 @@  enum {
 	IOMMUFD_CMD_IOAS_MAP,
 	IOMMUFD_CMD_IOAS_UNMAP,
 	IOMMUFD_CMD_OPTION,
+	IOMMUFD_CMD_VFIO_IOAS,
 };
 
 /**
@@ -291,4 +292,39 @@  struct iommu_option {
 	__aligned_u64 val64;
 };
 #define IOMMU_OPTION _IO(IOMMUFD_TYPE, IOMMUFD_CMD_OPTION)
+
+/**
+ * enum iommufd_vfio_ioas_op
+ * @IOMMU_VFIO_IOAS_GET: Get the current compatibility IOAS
+ * @IOMMU_VFIO_IOAS_SET: Change the current compatibility IOAS
+ * @IOMMU_VFIO_IOAS_CLEAR: Disable VFIO compatibility
+ */
+enum iommufd_vfio_ioas_op {
+	IOMMU_VFIO_IOAS_GET = 0,
+	IOMMU_VFIO_IOAS_SET = 1,
+	IOMMU_VFIO_IOAS_CLEAR = 2,
+};
+
+/**
+ * struct iommu_vfio_ioas - ioctl(IOMMU_VFIO_IOAS)
+ * @size: sizeof(struct iommu_vfio_ioas)
+ * @ioas_id: For IOMMU_VFIO_IOAS_SET the input IOAS ID to set
+ *           For IOMMU_VFIO_IOAS_GET will output the IOAS ID
+ * @op: One of enum iommufd_vfio_ioas_op
+ * @__reserved: Must be 0
+ *
+ * The VFIO compatibility support uses a single ioas because VFIO APIs do not
+ * support the ID field. Set or Get the IOAS that VFIO compatibility will use.
+ * When VFIO_GROUP_SET_CONTAINER is used on an iommufd it will get the
+ * compatibility ioas, either by taking what is already set, or auto creating
+ * one. From then on VFIO will continue to use that ioas and is not affected by
+ * this ioctl. SET or CLEAR does not destroy any auto-created IOAS.
+ */
+struct iommu_vfio_ioas {
+	__u32 size;
+	__u32 ioas_id;
+	__u16 op;
+	__u16 __reserved;
+};
+#define IOMMU_VFIO_IOAS _IO(IOMMUFD_TYPE, IOMMUFD_CMD_VFIO_IOAS)
 #endif