[RFC,04/30] iommu/arm-smmu-v3: Add support for PCI ATS

Message ID 20170227195441.5170-5-jean-philippe.brucker@arm.com (mailing list archive)
State New, archived
Delegated to: Bjorn Helgaas

Commit Message

Jean-Philippe Brucker Feb. 27, 2017, 7:54 p.m. UTC
PCIe devices can implement their own TLB, called an Address Translation Cache
(ATC). The steps involved in the use and maintenance of such caches are:

* Device sends an Address Translation Request for a given IOVA to the
  IOMMU. If the translation succeeds, the IOMMU returns the corresponding
  physical address, which is stored in the device's ATC.

* Device can then use the physical address directly in a transaction.
  A PCIe device does so by setting the TLP AT field to 0b10 - translated.
  The SMMU might check that the device is allowed to send translated
  transactions, and let it pass through.

* When an address is unmapped, the CPU sends a CMD_ATC_INV command to the
  SMMU, which is relayed to the device.

In theory, this doesn't require a lot of software intervention. The IOMMU
driver needs to enable ATS when adding a PCI device, and send an
invalidation request when unmapping. Note that this invalidation is
allowed to take up to a minute, according to the PCIe spec. In
addition, the invalidation queue on the ATC side is fairly small, 32 by
default, so we cannot keep many invalidations in flight (see ATS spec
section 3.5, Invalidate Flow Control).

Handling these constraints properly would require postponing invalidations
and keeping the stale mappings around until we're certain that all devices
have forgotten about them. This requires major work in the page table
managers, and is therefore not done by this patch.

  Range calculation
  -----------------

The invalidation packet itself is a bit awkward: the range must be naturally
aligned, which means that the start address is a multiple of the range size.
In addition, the size must be a power-of-two number of 4k pages. We have a
few options to enforce this constraint:

(1) Find the smallest naturally aligned region that covers the requested
    range. This is simple to compute and only takes one ATC_INV, but it
    will spill on lots of neighbouring ATC entries.

(2) Align the start address to the region size (rounded up to a power of
    two), and send a second invalidation for the next range of the same
    size. Still not great, but reduces spilling.

(3) Cover the range exactly with the smallest number of naturally aligned
    regions. This would be interesting to implement but as for (2),
    requires multiple ATC_INV.

As I suspect ATC invalidation packets will be a very scarce resource,
we'll go with option (1) for now, and only send one big invalidation.

Note that with io-pgtable, the unmap function is called for each page, so
this doesn't matter. The problem shows up when sharing page tables with
the MMU.
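
To illustrate option (1), here is a minimal standalone sketch of the span
computation (plain user-space C; the in-kernel helper added by this patch
does the same thing with fls_long(), and additionally rounds the input to
the SMMU granule, which this sketch omits):

#include <stdint.h>
#include <stdio.h>

/* ATC invalidations always operate on 4k pages */
#define ATC_GRAIN_SHIFT	12

static void atc_inv_range(uint64_t iova, uint64_t size,
                          uint64_t *addr, unsigned int *log2_span)
{
        uint64_t page_start = iova >> ATC_GRAIN_SHIFT;
        uint64_t page_end = (iova + size - 1) >> ATC_GRAIN_SHIFT;
        uint64_t diff = page_start ^ page_end;
        /* Most significant differing bit gives the required span */
        unsigned int span = diff ? 64 - __builtin_clzll(diff) : 0;

        /* Align the start down so that the region is naturally aligned */
        page_start &= ~((1ULL << span) - 1);

        *addr = page_start << ATC_GRAIN_SHIFT;
        *log2_span = span;
}

int main(void)
{
        uint64_t addr;
        unsigned int span;

        /* Unmapping pages [7; 10] spills to [0; 15]: one 16-page ATC_INV */
        atc_inv_range(0x7000, 4 * 4096, &addr, &span);
        printf("ATC_INV addr=%#llx, %llu pages\n",
               (unsigned long long)addr, 1ULL << span);
        return 0;
}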

  Locking
  -------

The atc_invalidate function is called from arm_smmu_unmap, with pgtbl_lock
held (hardirq-safe). When sharing page tables with the MMU, we will have a
few more call sites:

* When unbinding an address space from a device, to invalidate the whole
  address space.
* When a task bound to a device does an mlock, munmap, etc. This comes
  from an MMU notifier, with mmap_sem and pte_lock held.

Given this, all locks taken on the ATC invalidation path must be hardirq-
safe.
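
For reference, the lock nesting used by arm_smmu_atc_invalidate_domain() in
this patch reflects that constraint (trimmed fragment, annotated): the outer
lock uses the irqsave variant, and the inner lock can be a plain spin_lock()
because interrupts are already disabled at that point.

        spin_lock_irqsave(&smmu_domain->groups_lock, flags);
        list_for_each_entry(smmu_group, &smmu_domain->groups, domain_head) {
                /* IRQs already disabled by the outer irqsave */
                spin_lock(&smmu_group->devices_lock);
                /* ... invalidate each master's ATC ... */
                spin_unlock(&smmu_group->devices_lock);
        }
        spin_unlock_irqrestore(&smmu_domain->groups_lock, flags);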

  Timeout
  -------

Some SMMU implementations will raise a CERROR_ATC_INV_SYNC when a CMD_SYNC
fails because of an ATC invalidation. Some will just fail the CMD_SYNC.
Others might let CMD_SYNC complete and have an asynchronous IMPDEF
mechanism to record the error. When we receive a CERROR_ATC_INV_SYNC, we
could retry sending all ATC_INV commands since the last successful CMD_SYNC.
When a CMD_SYNC fails without CERROR_ATC_INV_SYNC, we could retry sending
*all* commands since the last successful CMD_SYNC. This patch doesn't
properly handle timeouts and ignores devices that don't behave, which might
lead to memory corruption.

  Optional support
  ----------------

For the moment, enable ATS whenever a device advertises it. Later, we
might want to allow users to opt in for the whole system or individual
devices via sysfs or cmdline. Some firmware interfaces also provide a
description of ATS capabilities in the root complex, and we might want to
add a similar capability in DT. For instance, the following could be added
to bindings/pci/pci-iommu.txt, as an optional property to PCI RC:

- ats-map: describe Address Translation Service support by the root
  complex. This property is an arbitrary number of tuples of
  (rid-base,length). Any RID in this interval is allowed to issue address
  translation requests.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/arm-smmu-v3.c | 262 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 250 insertions(+), 12 deletions(-)

Comments

Sinan Kaya March 1, 2017, 7:24 p.m. UTC | #1
On 2/27/2017 2:54 PM, Jean-Philippe Brucker wrote:
> /* Initialise command lazily */
> +		if (!cmd.opcode)
> +			arm_smmu_atc_invalidate_to_cmd(smmu, iova, size, &cmd);
> +
> +		spin_lock(&smmu_group->devices_lock);
> +
> +		list_for_each_entry(master, &smmu_group->devices, group_head)
> +			arm_smmu_atc_invalidate_master(master, &cmd);
> +
> +		/*
> +		 * TODO: ensure we do a sync whenever we have sent ats_queue_depth
> +		 * invalidations to the same device.
> +		 */
> +		arm_smmu_cmdq_issue_cmd(smmu, &sync_cmd);
> +

It is possible to observe an ATS invalidation timeout of up to 90 seconds,
according to the PCIe spec. How does the current code deal with this?
Jean-Philippe Brucker March 2, 2017, 10:51 a.m. UTC | #2
Hi Sinan,

On 01/03/17 19:24, Sinan Kaya wrote:
> On 2/27/2017 2:54 PM, Jean-Philippe Brucker wrote:
>> /* Initialise command lazily */
>> +		if (!cmd.opcode)
>> +			arm_smmu_atc_invalidate_to_cmd(smmu, iova, size, &cmd);
>> +
>> +		spin_lock(&smmu_group->devices_lock);
>> +
>> +		list_for_each_entry(master, &smmu_group->devices, group_head)
>> +			arm_smmu_atc_invalidate_master(master, &cmd);
>> +
>> +		/*
>> +		 * TODO: ensure we do a sync whenever we have sent ats_queue_depth
>> +		 * invalidations to the same device.
>> +		 */
>> +		arm_smmu_cmdq_issue_cmd(smmu, &sync_cmd);
>> +
> 
> It is possible to observe ATS invalidation timeout up to 90 seconds according to PCIe
> spec. How does the current code deal with this?
> 

Currently we give up waiting for the sync after 100us
(ARM_SMMU_POLL_TIMEOUT_US). A successful sync guarantees that all ATC
invalidations since the last sync were successful, so in case of timeout we
should resend all of them. The delay itself isn't important at the moment,
since we don't handle invalidation failure at all. It's fire and forget, and
I haven't come up with a complete solution yet.

The simplest error handling would be to retry invalidation after 90
seconds if the sync didn't complete. Then after a number of failed
attempts, maybe try to reset the device. Given that ats_invalidate is
generally called in irq-safe context, we would be blocking the CPU for
minutes at a time, which seems unwise. A proper solution would be to
postpone the unmap and return an error, although unmap errors are
usually ignored.

To avoid letting anyone remap something at that address until we're sure
the invalidation succeeded, we would need to repopulate the page tables
with the stale mapping, and add a delayed work that inspects the status
of the invalidation and tries again if it failed. If the invalidation
comes from the IOMMU core, we control the page tables and it should be
doable. If it comes from mm/ however, it's more complicated. MMU
notifiers only tell us that the mapping is going away, they don't
provide us with a way to hold on to them. Until all stale mappings have
been invalidated, we also need to hold on to the address space.

I think the problem is complex enough to deserve a series of its own,
once we confirm that it may happen in hardware and have a rough idea of
the timeout values encountered.

Thanks,
Jean-Philippe
Sinan Kaya March 2, 2017, 1:11 p.m. UTC | #3
On 2017-03-02 05:51, Jean-Philippe Brucker wrote:
> Hi Sinan,
> 
> On 01/03/17 19:24, Sinan Kaya wrote:
>> On 2/27/2017 2:54 PM, Jean-Philippe Brucker wrote:
>>> /* Initialise command lazily */
>>> +		if (!cmd.opcode)
>>> +			arm_smmu_atc_invalidate_to_cmd(smmu, iova, size, &cmd);
>>> +
>>> +		spin_lock(&smmu_group->devices_lock);
>>> +
>>> +		list_for_each_entry(master, &smmu_group->devices, group_head)
>>> +			arm_smmu_atc_invalidate_master(master, &cmd);
>>> +
>>> +		/*
>>> +		 * TODO: ensure we do a sync whenever we have sent ats_queue_depth
>>> +		 * invalidations to the same device.
>>> +		 */
>>> +		arm_smmu_cmdq_issue_cmd(smmu, &sync_cmd);
>>> +
>> 
>> It is possible to observe ATS invalidation timeout up to 90 seconds 
>> according to PCIe
>> spec. How does the current code deal with this?
>> 
> 
> Currently we give up waiting for sync after 100us 
> (ARM_SMMU_POLL_TIMEOUT_US).
> A successful sync guarantees that all ATC invalidations since last sync
> were successful, so in case of timeout we should resend all of them.
> The delay itself isn't important at the moment, since we don't handle
> invalidaton failure at all. It's fire and forget, and I haven't come up
> with a complete solution yet.
> 
> The simplest error handling would be to retry invalidation after 90
> seconds if the sync didn't complete. Then after a number of failed
> attempts, maybe try to reset the device. Given that ats_invalidate is
> generally called in irq-safe context, we would be blocking the CPU for
> minutes at a time, which seems unwise. A proper solution would be to
> postpone the unmap and return an error, although unmap errors are
> usually ignored.
> 
> To avoid letting anyone remap something at that address until we're 
> sure
> the invalidation succeeded, we would need to repopulate the page tables
> with the stale mapping, and add a delayed work that inspects the status
> of the invalidation and tries again if it failed. If the invalidation
> comes from the IOMMU core, we control the page tables and it should be
> doable. If it comes from mm/ however, it's more complicated. MMU
> notifiers only tell us that the mapping is going away, they don't
> provide us with a way to hold on to them. Until all stale mappings have
> been invalidated, we also need to hold on to the address space.
> 
> I think the problem is complex enough to deserve a series of its own,
> once we confirm that it may happen in hardware and have a rough idea of
> the timeout values encountered.

Ok, fair enough. I think the arm-smmu-v3 driver should follow the same
design pattern that other IOMMU drivers use to solve this issue, since they
already handle it.

Based on what I see, if a timeout happens, your sync operation will not
complete (consumer != producer) until the timeout finishes.



> 
> Thanks,
> Jean-Philippe
Sinan Kaya March 8, 2017, 3:26 p.m. UTC | #4
On 2/27/2017 2:54 PM, Jean-Philippe Brucker wrote:
> +	ats_enabled = !arm_smmu_enable_ats(master);
> +

You should make the ats_supported field in the IORT table part of the
decision process for when to enable ATS.
Jean-Philippe Brucker March 21, 2017, 7:38 p.m. UTC | #5
On 08/03/17 15:26, Sinan Kaya wrote:
> On 2/27/2017 2:54 PM, Jean-Philippe Brucker wrote:
>> +	ats_enabled = !arm_smmu_enable_ats(master);
>> +
> 
> You should make ats_supported field in IORT table part of the decision
> process for when to enable ATS.
> 

Agreed. I will also draft a proposal for adding the ATS property to the PCI
host controller DT bindings; it seems to be missing.

Thanks,
Jean-Philippe
Sunil Kovvuri April 3, 2017, 8:34 a.m. UTC | #6
> +static size_t arm_smmu_atc_invalidate_domain(struct arm_smmu_domain *smmu_domain,
> +                                            unsigned long iova, size_t size)
> +{
> +       unsigned long flags;
> +       struct arm_smmu_cmdq_ent cmd = {0};
> +       struct arm_smmu_group *smmu_group;
> +       struct arm_smmu_master_data *master;
> +       struct arm_smmu_device *smmu = smmu_domain->smmu;
> +       struct arm_smmu_cmdq_ent sync_cmd = {
> +               .opcode = CMDQ_OP_CMD_SYNC,
> +       };
> +
> +       spin_lock_irqsave(&smmu_domain->groups_lock, flags);
> +
> +       list_for_each_entry(smmu_group, &smmu_domain->groups, domain_head) {
> +               if (!smmu_group->ats_enabled)
> +                       continue;

If ATS is not supported, this seems to increase the number of cycles spent
under pgtbl_lock.
Can we return from this API by checking 'ARM_SMMU_FEAT_ATS' in smmu->features?

Thanks,
Sunil.
Jean-Philippe Brucker April 3, 2017, 10:14 a.m. UTC | #7
On 03/04/17 09:34, Sunil Kovvuri wrote:
>> +static size_t arm_smmu_atc_invalidate_domain(struct arm_smmu_domain *smmu_domain,
>> +                                            unsigned long iova, size_t size)
>> +{
>> +       unsigned long flags;
>> +       struct arm_smmu_cmdq_ent cmd = {0};
>> +       struct arm_smmu_group *smmu_group;
>> +       struct arm_smmu_master_data *master;
>> +       struct arm_smmu_device *smmu = smmu_domain->smmu;
>> +       struct arm_smmu_cmdq_ent sync_cmd = {
>> +               .opcode = CMDQ_OP_CMD_SYNC,
>> +       };
>> +
>> +       spin_lock_irqsave(&smmu_domain->groups_lock, flags);
>> +
>> +       list_for_each_entry(smmu_group, &smmu_domain->groups, domain_head) {
>> +               if (!smmu_group->ats_enabled)
>> +                       continue;
> 
> If ATS is not supported, this seems to increase no of cycles spent in
> pgtbl_lock.
> Can we return from this API by checking 'ARM_SMMU_FEAT_ATS' in smmu->features ?

Sure, I can add a check before taking the lock. Have you been able to
observe a significant difference in cycles between checking FEAT_ATS,
checking group->ats_enabled after taking the lock, and removing this
function call altogether?

Thanks,
Jean-Philippe
Sunil Kovvuri April 3, 2017, 11:42 a.m. UTC | #8
On Mon, Apr 3, 2017 at 3:44 PM, Jean-Philippe Brucker
<jean-philippe.brucker@arm.com> wrote:
> On 03/04/17 09:34, Sunil Kovvuri wrote:
>>> +static size_t arm_smmu_atc_invalidate_domain(struct arm_smmu_domain *smmu_domain,
>>> +                                            unsigned long iova, size_t size)
>>> +{
>>> +       unsigned long flags;
>>> +       struct arm_smmu_cmdq_ent cmd = {0};
>>> +       struct arm_smmu_group *smmu_group;
>>> +       struct arm_smmu_master_data *master;
>>> +       struct arm_smmu_device *smmu = smmu_domain->smmu;
>>> +       struct arm_smmu_cmdq_ent sync_cmd = {
>>> +               .opcode = CMDQ_OP_CMD_SYNC,
>>> +       };
>>> +
>>> +       spin_lock_irqsave(&smmu_domain->groups_lock, flags);
>>> +
>>> +       list_for_each_entry(smmu_group, &smmu_domain->groups, domain_head) {
>>> +               if (!smmu_group->ats_enabled)
>>> +                       continue;
>>
>> If ATS is not supported, this seems to increase no of cycles spent in
>> pgtbl_lock.
>> Can we return from this API by checking 'ARM_SMMU_FEAT_ATS' in smmu->features ?
>
> Sure, I can add a check before taking the lock. Have you been able to
> observe a significant difference in cycles between checking FEAT_ATS,
> checking group->ats_enabled after taking the lock, and removing this
> function call altogether?
>
> Thanks,
> Jean-Philippe

No, I haven't verified it; I was just making an observation.

Thanks,
Sunil.
Jean-Philippe Brucker April 3, 2017, 11:56 a.m. UTC | #9
On 03/04/17 12:42, Sunil Kovvuri wrote:
> On Mon, Apr 3, 2017 at 3:44 PM, Jean-Philippe Brucker
> <jean-philippe.brucker@arm.com> wrote:
>> On 03/04/17 09:34, Sunil Kovvuri wrote:
>>>> +static size_t arm_smmu_atc_invalidate_domain(struct arm_smmu_domain *smmu_domain,
>>>> +                                            unsigned long iova, size_t size)
>>>> +{
>>>> +       unsigned long flags;
>>>> +       struct arm_smmu_cmdq_ent cmd = {0};
>>>> +       struct arm_smmu_group *smmu_group;
>>>> +       struct arm_smmu_master_data *master;
>>>> +       struct arm_smmu_device *smmu = smmu_domain->smmu;
>>>> +       struct arm_smmu_cmdq_ent sync_cmd = {
>>>> +               .opcode = CMDQ_OP_CMD_SYNC,
>>>> +       };
>>>> +
>>>> +       spin_lock_irqsave(&smmu_domain->groups_lock, flags);
>>>> +
>>>> +       list_for_each_entry(smmu_group, &smmu_domain->groups, domain_head) {
>>>> +               if (!smmu_group->ats_enabled)
>>>> +                       continue;
>>>
>>> If ATS is not supported, this seems to increase no of cycles spent in
>>> pgtbl_lock.
>>> Can we return from this API by checking 'ARM_SMMU_FEAT_ATS' in smmu->features ?
>>
>> Sure, I can add a check before taking the lock. Have you been able to
>> observe a significant difference in cycles between checking FEAT_ATS,
>> checking group->ats_enabled after taking the lock, and removing this
>> function call altogether?
>>
>> Thanks,
>> Jean-Philippe
> 
> No, I haven't verified, was just making an observation.

Fair enough, I think avoiding the lock when ATS isn't in use makes sense.
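
For reference, the early-out under discussion could look roughly like this
(a sketch of the check only, not the actual next version):

static size_t arm_smmu_atc_invalidate_domain(struct arm_smmu_domain *smmu_domain,
					     unsigned long iova, size_t size)
{
	struct arm_smmu_device *smmu = smmu_domain->smmu;

	/* Nothing to invalidate if the SMMU doesn't support ATS at all */
	if (!(smmu->features & ARM_SMMU_FEAT_ATS))
		return size;

	/* ... take groups_lock and walk the groups as before ... */
}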

Thanks,
Jean-Philippe
Tomasz Nowicki May 10, 2017, 12:54 p.m. UTC | #10
Hi Jean,

On 27.02.2017 20:54, Jean-Philippe Brucker wrote:
> +/*
> + * Returns -ENOSYS if ATS is not supported either by the device or by the SMMU
> + */
> +static int arm_smmu_enable_ats(struct arm_smmu_master_data *master)
> +{
> +	int ret;
> +	size_t stu;
> +	struct pci_dev *pdev;
> +	struct arm_smmu_device *smmu = master->smmu;
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_ATS) || !dev_is_pci(master->dev))
> +		return -ENOSYS;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +#ifdef CONFIG_PCI_ATS
> +	if (!pdev->ats_cap)
> +		return -ENOSYS;
> +#else
> +	return -ENOSYS;
> +#endif

Nit: This deserves to be another helper in ats.c like:

int pci_ats_supported(struct pci_dev *pdev)
{
        if (!pdev->ats_cap)
                return 0;

        return 1;
}

Thanks,
Tomasz
Jean-Philippe Brucker May 10, 2017, 1:35 p.m. UTC | #11
On 10/05/17 13:54, Tomasz Nowicki wrote:
> Hi Jean,
> 
> On 27.02.2017 20:54, Jean-Philippe Brucker wrote:
>> +/*
>> + * Returns -ENOSYS if ATS is not supported either by the device or by
>> the SMMU
>> + */
>> +static int arm_smmu_enable_ats(struct arm_smmu_master_data *master)
>> +{
>> +    int ret;
>> +    size_t stu;
>> +    struct pci_dev *pdev;
>> +    struct arm_smmu_device *smmu = master->smmu;
>> +
>> +    if (!(smmu->features & ARM_SMMU_FEAT_ATS) || !dev_is_pci(master->dev))
>> +        return -ENOSYS;
>> +
>> +    pdev = to_pci_dev(master->dev);
>> +
>> +#ifdef CONFIG_PCI_ATS
>> +    if (!pdev->ats_cap)
>> +        return -ENOSYS;
>> +#else
>> +    return -ENOSYS;
>> +#endif
> 
> Nit: This deserves to be another helper in ats.c like:
> 
> int pci_ats_supported(struct pci_dev *dev) {
>         if (!pdev->ats_cap)
>                 return 0;
> 
>         return 1;
> }

Indeed, although in my next version I'll remove this check altogether.
Instead I now rely on pci_enable_ats to check for ats_cap (as discussed in
patch 3). The downside is that we can't distinguish between the absence of
ATS and an error in enabling ATS. So we don't print a message in the latter
case anymore; we expect device drivers to check whether ATS is enabled.

Thanks,
Jean
Leizhen (ThunderTown) May 23, 2017, 8:41 a.m. UTC | #12
On 2017/2/28 3:54, Jean-Philippe Brucker wrote:
> PCIe devices can implement their own TLB, named Address Translation Cache
> (ATC). Steps involved in the use and maintenance of such caches are:
> 
> * Device sends an Address Translation Request for a given IOVA to the
>   IOMMU. If the translation succeeds, the IOMMU returns the corresponding
>   physical address, which is stored in the device's ATC.
> 
> * Device can then use the physical address directly in a transaction.
>   A PCIe device does so by setting the TLP AT field to 0b10 - translated.
>   The SMMU might check that the device is allowed to send translated
>   transactions, and let it pass through.
> 
> * When an address is unmapped, CPU sends a CMD_ATC_INV command to the
>   SMMU, that is relayed to the device.
> 
> In theory, this doesn't require a lot of software intervention. The IOMMU
> driver needs to enable ATS when adding a PCI device, and send an
> invalidation request when unmapping. Note that this invalidation is
> allowed to take up to a minute, according to the PCIe spec. In
> addition, the invalidation queue on the ATC side is fairly small, 32 by
> default, so we cannot keep many invalidations in flight (see ATS spec
> section 3.5, Invalidate Flow Control).
> 
> Handling these constraints properly would require to postpone
> invalidations, and keep the stale mappings until we're certain that all
> devices forgot about them. This requires major work in the page table
> managers, and is therefore not done by this patch.
> 
>   Range calculation
>   -----------------
> 
> The invalidation packet itself is a bit awkward: range must be naturally
> aligned, which means that the start address is a multiple of the range
> size. In addition, the size must be a power of two number of 4k pages. We
> have a few options to enforce this constraint:
> 
> (1) Find the smallest naturally aligned region that covers the requested
>     range. This is simple to compute and only takes one ATC_INV, but it
>     will spill on lots of neighbouring ATC entries.
> 
> (2) Align the start address to the region size (rounded up to a power of
>     two), and send a second invalidation for the next range of the same
>     size. Still not great, but reduces spilling.
> 
> (3) Cover the range exactly with the smallest number of naturally aligned
>     regions. This would be interesting to implement but as for (2),
>     requires multiple ATC_INV.
> 
> As I suspect ATC invalidation packets will be a very scarce resource,
> we'll go with option (1) for now, and only send one big invalidation.
> 
> Note that with io-pgtable, the unmap function is called for each page, so
> this doesn't matter. The problem shows up when sharing page tables with
> the MMU.
Supposing this is true, I'd like to choose option (2), because the worst
cases of both (1) and (2) will not happen, but the code of (2) will look
clearer. And (2) is technically more acceptable.

> 
>   Locking
>   -------
> 
> The atc_invalidate function is called from arm_smmu_unmap, with pgtbl_lock
> held (hardirq-safe). When sharing page tables with the MMU, we will have a
> few more call sites:
> 
> * When unbinding an address space from a device, to invalidate the whole
>   address space.
> * When a task bound to a device does an mlock, munmap, etc. This comes
>   from an MMU notifier, with mmap_sem and pte_lock held.
> 
> Given this, all locks take on the ATC invalidation path must be hardirq-
> safe.
> 
>   Timeout
>   -------
> 
> Some SMMU implementations will raise a CERROR_ATC_INV_SYNC when a CMD_SYNC
> fails because of an ATC invalidation. Some will just fail the CMD_SYNC.
> Others might let CMD_SYNC complete and have an asynchronous IMPDEF
> mechanism to record the error. When we receive a CERROR_ATC_INV_SYNC, we
> could retry sending all ATC_INV since last successful CMD_SYNC. When a
> CMD_SYNC fails without CERROR_ATC_INV_SYNC, we could retry sending *all*
> commands since last successful CMD_SYNC. This patch doesn't properly
> handle timeout, and ignores devices that don't behave. It might lead to
> memory corruption.
> 
>   Optional support
>   ----------------
> 
> For the moment, enable ATS whenever a device advertises it. Later, we
> might want to allow users to opt-in for the whole system or individual
> devices via sysfs or cmdline. Some firmware interfaces also provide a
> description of ATS capabilities in the root complex, and we might want to
> add a similar capability in DT. For instance, the following could be added
> to bindings/pci/pci-iommu.txt, as an optional property to PCI RC:
> 
> - ats-map: describe Address Translation Service support by the root
>   complex. This property is an arbitrary number of tuples of
>   (rid-base,length). Any RID in this interval is allowed to issue address
>   translation requests.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/arm-smmu-v3.c | 262 ++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 250 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 69d00416990d..e7b940146ae3 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -35,6 +35,7 @@
>  #include <linux/of_iommu.h>
>  #include <linux/of_platform.h>
>  #include <linux/pci.h>
> +#include <linux/pci-ats.h>
>  #include <linux/platform_device.h>
>  
>  #include <linux/amba/bus.h>
> @@ -102,6 +103,7 @@
>  #define IDR5_OAS_48_BIT			(5 << IDR5_OAS_SHIFT)
>  
>  #define ARM_SMMU_CR0			0x20
> +#define CR0_ATSCHK			(1 << 4)
>  #define CR0_CMDQEN			(1 << 3)
>  #define CR0_EVTQEN			(1 << 2)
>  #define CR0_PRIQEN			(1 << 1)
> @@ -343,6 +345,7 @@
>  #define CMDQ_ERR_CERROR_NONE_IDX	0
>  #define CMDQ_ERR_CERROR_ILL_IDX		1
>  #define CMDQ_ERR_CERROR_ABT_IDX		2
> +#define CMDQ_ERR_CERROR_ATC_INV_IDX	3
>  
>  #define CMDQ_0_OP_SHIFT			0
>  #define CMDQ_0_OP_MASK			0xffUL
> @@ -364,6 +367,15 @@
>  #define CMDQ_TLBI_1_VA_MASK		~0xfffUL
>  #define CMDQ_TLBI_1_IPA_MASK		0xfffffffff000UL
>  
> +#define CMDQ_ATC_0_SSID_SHIFT		12
> +#define CMDQ_ATC_0_SSID_MASK		0xfffffUL
> +#define CMDQ_ATC_0_SID_SHIFT		32
> +#define CMDQ_ATC_0_SID_MASK		0xffffffffUL
> +#define CMDQ_ATC_0_GLOBAL		(1UL << 9)
> +#define CMDQ_ATC_1_SIZE_SHIFT		0
> +#define CMDQ_ATC_1_SIZE_MASK		0x3fUL
> +#define CMDQ_ATC_1_ADDR_MASK		~0xfffUL
> +
>  #define CMDQ_PRI_0_SSID_SHIFT		12
>  #define CMDQ_PRI_0_SSID_MASK		0xfffffUL
>  #define CMDQ_PRI_0_SID_SHIFT		32
> @@ -417,6 +429,11 @@ module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
>  MODULE_PARM_DESC(disable_bypass,
>  	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
>  
> +static bool disable_ats_check;
> +module_param_named(disable_ats_check, disable_ats_check, bool, S_IRUGO);
> +MODULE_PARM_DESC(disable_ats_check,
> +	"By default, the SMMU checks whether each incoming transaction marked as translated is allowed by the stream configuration. This option disables the check.");
> +
>  enum pri_resp {
>  	PRI_RESP_DENY,
>  	PRI_RESP_FAIL,
> @@ -485,6 +502,15 @@ struct arm_smmu_cmdq_ent {
>  			u64			addr;
>  		} tlbi;
>  
> +		#define CMDQ_OP_ATC_INV		0x40
> +		struct {
> +			u32			sid;
> +			u32			ssid;
> +			u64			addr;
> +			u8			size;
> +			bool			global;
> +		} atc;
> +
>  		#define CMDQ_OP_PRI_RESP	0x41
>  		struct {
>  			u32			sid;
> @@ -662,6 +688,8 @@ struct arm_smmu_group {
>  
>  	struct list_head		devices;
>  	spinlock_t			devices_lock;
> +
> +	bool				ats_enabled;
>  };
>  
>  struct arm_smmu_option_prop {
> @@ -839,6 +867,14 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  	case CMDQ_OP_TLBI_S12_VMALL:
>  		cmd[0] |= (u64)ent->tlbi.vmid << CMDQ_TLBI_0_VMID_SHIFT;
>  		break;
> +	case CMDQ_OP_ATC_INV:
> +		cmd[0] |= ent->substream_valid ? CMDQ_0_SSV : 0;
> +		cmd[0] |= ent->atc.global ? CMDQ_ATC_0_GLOBAL : 0;
> +		cmd[0] |= ent->atc.ssid << CMDQ_ATC_0_SSID_SHIFT;
> +		cmd[0] |= (u64)ent->atc.sid << CMDQ_ATC_0_SID_SHIFT;
> +		cmd[1] |= ent->atc.size << CMDQ_ATC_1_SIZE_SHIFT;
> +		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
> +		break;
>  	case CMDQ_OP_PRI_RESP:
>  		cmd[0] |= ent->substream_valid ? CMDQ_0_SSV : 0;
>  		cmd[0] |= ent->pri.ssid << CMDQ_PRI_0_SSID_SHIFT;
> @@ -874,6 +910,7 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
>  		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
>  		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
> +		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
>  	};
>  
>  	int i;
> @@ -893,6 +930,13 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  		dev_err(smmu->dev, "retrying command fetch\n");
>  	case CMDQ_ERR_CERROR_NONE_IDX:
>  		return;
> +	case CMDQ_ERR_CERROR_ATC_INV_IDX:
> +		/*
> +		 * CMD_SYNC failed because of ATC Invalidation completion
> +		 * timeout. CONS is still pointing at the CMD_SYNC. Ensure other
> +		 * operations complete by re-submitting the CMD_SYNC, cowardly
> +		 * ignoring the ATC error.
> +		 */
>  	case CMDQ_ERR_CERROR_ILL_IDX:
>  		/* Fallthrough */
>  	default:
> @@ -1084,9 +1128,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
>  			 STRTAB_STE_1_S1C_CACHE_WBRA
>  			 << STRTAB_STE_1_S1COR_SHIFT |
>  			 STRTAB_STE_1_S1C_SH_ISH << STRTAB_STE_1_S1CSH_SHIFT |
> -#ifdef CONFIG_PCI_ATS
> -			 STRTAB_STE_1_EATS_TRANS << STRTAB_STE_1_EATS_SHIFT |
> -#endif
>  			 STRTAB_STE_1_STRW_NSEL1 << STRTAB_STE_1_STRW_SHIFT);
>  
>  		if (smmu->features & ARM_SMMU_FEAT_STALLS)
> @@ -1115,6 +1156,10 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
>  		val |= STRTAB_STE_0_CFG_S2_TRANS;
>  	}
>  
> +	if (IS_ENABLED(CONFIG_PCI_ATS) && !ste_live)
> +		dst[1] |= cpu_to_le64(STRTAB_STE_1_EATS_TRANS
> +				      << STRTAB_STE_1_EATS_SHIFT);
> +
>  	arm_smmu_sync_ste_for_sid(smmu, sid);
>  	dst[0] = cpu_to_le64(val);
>  	arm_smmu_sync_ste_for_sid(smmu, sid);
> @@ -1377,6 +1422,120 @@ static const struct iommu_gather_ops arm_smmu_gather_ops = {
>  	.tlb_sync	= arm_smmu_tlb_sync,
>  };
>  
> +static void arm_smmu_atc_invalidate_to_cmd(struct arm_smmu_device *smmu,
> +					   unsigned long iova, size_t size,
> +					   struct arm_smmu_cmdq_ent *cmd)
> +{
> +	size_t log2_span;
> +	size_t span_mask;
> +	size_t smmu_grain;
> +	/* ATC invalidates are always on 4096 bytes pages */
> +	size_t inval_grain_shift = 12;
> +	unsigned long iova_start, iova_end;
> +	unsigned long page_start, page_end;
> +
> +	smmu_grain	= 1ULL << __ffs(smmu->pgsize_bitmap);
> +
> +	/* In case parameters are not aligned on PAGE_SIZE */
> +	iova_start	= round_down(iova, smmu_grain);
> +	iova_end	= round_up(iova + size, smmu_grain) - 1;
> +
> +	page_start	= iova_start >> inval_grain_shift;
> +	page_end	= iova_end >> inval_grain_shift;
> +
> +	/*
> +	 * Find the smallest power of two that covers the range. Most
> +	 * significant differing bit between start and end address indicates the
> +	 * required span, ie. fls(start ^ end). For example:
> +	 *
> +	 * We want to invalidate pages [8; 11]. This is already the ideal range:
> +	 *		x = 0b1000 ^ 0b1011 = 0b11
> +	 *		span = 1 << fls(x) = 4
> +	 *
> +	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
> +	 *		x = 0b0111 ^ 0b1010 = 0b1101
> +	 *		span = 1 << fls(x) = 16
> +	 */
> +	log2_span	= fls_long(page_start ^ page_end);
> +	span_mask	= (1ULL << log2_span) - 1;
> +
> +	page_start	&= ~span_mask;
In my opinion, the below (option 2) is more readable:

end = iova + size;
size = max(size, smmu_grain);
size = roundup_pow_of_two(size);
start = iova & ~(size - 1);
if (end < (start + size))
	//all included in (start,size)
else if (!(start & (2 * size - 1)))	//start aligned on (2 * size) boundary
	size <<= 1;			//double size
else
	//send two invalidate commands: (start,size), (start+size,size)

> +
> +	*cmd = (struct arm_smmu_cmdq_ent) {
> +		.opcode	= CMDQ_OP_ATC_INV,
> +		.atc	= {
> +			.addr = page_start << inval_grain_shift,
> +			.size = log2_span,
> +		}
> +	};
> +}
> +
> +static int arm_smmu_atc_invalidate_master(struct arm_smmu_master_data *master,
> +					  struct arm_smmu_cmdq_ent *cmd)
> +{
> +	int i;
> +	struct iommu_fwspec *fwspec = master->dev->iommu_fwspec;
> +	struct pci_dev *pdev = to_pci_dev(master->dev);
> +
> +	if (!pdev->ats_enabled)
> +		return 0;
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		cmd->atc.sid = fwspec->ids[i];
> +
> +		dev_dbg(master->smmu->dev,
> +			"ATC invalidate %#x:%#x:%#llx-%#llx, esz=%d\n",
> +			cmd->atc.sid, cmd->atc.ssid, cmd->atc.addr,
> +			cmd->atc.addr + (1 << (cmd->atc.size + 12)) - 1,
> +			cmd->atc.size);
> +
> +		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
> +	}
> +
> +	return 0;
> +}
> +
> +static size_t arm_smmu_atc_invalidate_domain(struct arm_smmu_domain *smmu_domain,
> +					     unsigned long iova, size_t size)
> +{
> +	unsigned long flags;
> +	struct arm_smmu_cmdq_ent cmd = {0};
> +	struct arm_smmu_group *smmu_group;
> +	struct arm_smmu_master_data *master;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_cmdq_ent sync_cmd = {
> +		.opcode = CMDQ_OP_CMD_SYNC,
> +	};
> +
> +	spin_lock_irqsave(&smmu_domain->groups_lock, flags);
> +
> +	list_for_each_entry(smmu_group, &smmu_domain->groups, domain_head) {
> +		if (!smmu_group->ats_enabled)
> +			continue;
> +
> +		/* Initialise command lazily */
> +		if (!cmd.opcode)
> +			arm_smmu_atc_invalidate_to_cmd(smmu, iova, size, &cmd);
> +
> +		spin_lock(&smmu_group->devices_lock);
> +
> +		list_for_each_entry(master, &smmu_group->devices, group_head)
> +			arm_smmu_atc_invalidate_master(master, &cmd);
> +
> +		/*
> +		 * TODO: ensure we do a sync whenever we have sent ats_queue_depth
> +		 * invalidations to the same device.
> +		 */
> +		arm_smmu_cmdq_issue_cmd(smmu, &sync_cmd);
> +
> +		spin_unlock(&smmu_group->devices_lock);
> +	}
> +
> +	spin_unlock_irqrestore(&smmu_domain->groups_lock, flags);
> +
> +	return size;
> +}
> +
>  /* IOMMU API */
>  static bool arm_smmu_capable(enum iommu_cap cap)
>  {
> @@ -1782,7 +1941,10 @@ arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>  
>  	spin_lock_irqsave(&smmu_domain->pgtbl_lock, flags);
>  	ret = ops->unmap(ops, iova, size);
> +	if (ret)
> +		ret = arm_smmu_atc_invalidate_domain(smmu_domain, iova, size);
>  	spin_unlock_irqrestore(&smmu_domain->pgtbl_lock, flags);
> +
>  	return ret;
>  }
>  
> @@ -1830,11 +1992,63 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>  	return sid < limit;
>  }
>  
> +/*
> + * Returns -ENOSYS if ATS is not supported either by the device or by the SMMU
> + */
> +static int arm_smmu_enable_ats(struct arm_smmu_master_data *master)
> +{
> +	int ret;
> +	size_t stu;
> +	struct pci_dev *pdev;
> +	struct arm_smmu_device *smmu = master->smmu;
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_ATS) || !dev_is_pci(master->dev))
> +		return -ENOSYS;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +#ifdef CONFIG_PCI_ATS
> +	if (!pdev->ats_cap)
> +		return -ENOSYS;
> +#else
> +	return -ENOSYS;
> +#endif
> +
> +	/* Smallest Translation Unit: log2 of the smallest supported granule */
> +	stu = __ffs(smmu->pgsize_bitmap);
> +
> +	ret = pci_enable_ats(pdev, stu);
> +	if (ret) {
> +		dev_err(&pdev->dev, "cannot enable ATS: %d\n", ret);
> +		return ret;
> +	}
> +
> +	dev_dbg(&pdev->dev, "enabled ATS with STU = %zu\n", stu);
> +
> +	return 0;
> +}
> +
> +static void arm_smmu_disable_ats(struct arm_smmu_master_data *master)
> +{
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(master->dev))
> +		return;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	if (!pdev->ats_enabled)
> +		return;
> +
> +	pci_disable_ats(pdev);
> +}
> +
>  static struct iommu_ops arm_smmu_ops;
>  
>  static int arm_smmu_add_device(struct device *dev)
>  {
>  	int i, ret;
> +	bool ats_enabled;
>  	unsigned long flags;
>  	struct arm_smmu_device *smmu;
>  	struct arm_smmu_group *smmu_group;
> @@ -1880,19 +2094,31 @@ static int arm_smmu_add_device(struct device *dev)
>  		}
>  	}
>  
> +	ats_enabled = !arm_smmu_enable_ats(master);
> +
>  	group = iommu_group_get_for_dev(dev);
> -	if (!IS_ERR(group)) {
> -		smmu_group = to_smmu_group(group);
> +	if (IS_ERR(group)) {
> +		ret = PTR_ERR(group);
> +		goto err_disable_ats;
> +	}
>  
> -		spin_lock_irqsave(&smmu_group->devices_lock, flags);
> -		list_add(&master->group_head, &smmu_group->devices);
> -		spin_unlock_irqrestore(&smmu_group->devices_lock, flags);
> +	smmu_group = to_smmu_group(group);
>  
> -		iommu_group_put(group);
> -		iommu_device_link(&smmu->iommu, dev);
> -	}
> +	smmu_group->ats_enabled |= ats_enabled;
>  
> -	return PTR_ERR_OR_ZERO(group);
> +	spin_lock_irqsave(&smmu_group->devices_lock, flags);
> +	list_add(&master->group_head, &smmu_group->devices);
> +	spin_unlock_irqrestore(&smmu_group->devices_lock, flags);
> +
> +	iommu_group_put(group);
> +	iommu_device_link(&smmu->iommu, dev);
> +
> +	return 0;
> +
> +err_disable_ats:
> +	arm_smmu_disable_ats(master);
> +
> +	return ret;
>  }
>  
>  static void arm_smmu_remove_device(struct device *dev)
> @@ -1921,6 +2147,8 @@ static void arm_smmu_remove_device(struct device *dev)
>  		spin_unlock_irqrestore(&smmu_group->devices_lock, flags);
>  
>  		iommu_group_put(group);
> +
> +		arm_smmu_disable_ats(master);
>  	}
>  
>  	iommu_group_remove_device(dev);
> @@ -2485,6 +2713,16 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>  		}
>  	}
>  
> +	if (smmu->features & ARM_SMMU_FEAT_ATS && !disable_ats_check) {
> +		enables |= CR0_ATSCHK;
> +		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +					      ARM_SMMU_CR0ACK);
> +		if (ret) {
> +			dev_err(smmu->dev, "failed to enable ATS check\n");
> +			return ret;
> +		}
> +	}
> +
>  	ret = arm_smmu_setup_irqs(smmu);
>  	if (ret) {
>  		dev_err(smmu->dev, "failed to setup irqs\n");
>
Jean-Philippe Brucker May 23, 2017, 11:21 a.m. UTC | #13
On 23/05/17 09:41, Leizhen (ThunderTown) wrote:
> On 2017/2/28 3:54, Jean-Philippe Brucker wrote:
>> PCIe devices can implement their own TLB, named Address Translation Cache
>> (ATC). Steps involved in the use and maintenance of such caches are:
>>
>> * Device sends an Address Translation Request for a given IOVA to the
>>   IOMMU. If the translation succeeds, the IOMMU returns the corresponding
>>   physical address, which is stored in the device's ATC.
>>
>> * Device can then use the physical address directly in a transaction.
>>   A PCIe device does so by setting the TLP AT field to 0b10 - translated.
>>   The SMMU might check that the device is allowed to send translated
>>   transactions, and let it pass through.
>>
>> * When an address is unmapped, CPU sends a CMD_ATC_INV command to the
>>   SMMU, that is relayed to the device.
>>
>> In theory, this doesn't require a lot of software intervention. The IOMMU
>> driver needs to enable ATS when adding a PCI device, and send an
>> invalidation request when unmapping. Note that this invalidation is
>> allowed to take up to a minute, according to the PCIe spec. In
>> addition, the invalidation queue on the ATC side is fairly small, 32 by
>> default, so we cannot keep many invalidations in flight (see ATS spec
>> section 3.5, Invalidate Flow Control).
>>
>> Handling these constraints properly would require to postpone
>> invalidations, and keep the stale mappings until we're certain that all
>> devices forgot about them. This requires major work in the page table
>> managers, and is therefore not done by this patch.
>>
>>   Range calculation
>>   -----------------
>>
>> The invalidation packet itself is a bit awkward: range must be naturally
>> aligned, which means that the start address is a multiple of the range
>> size. In addition, the size must be a power of two number of 4k pages. We
>> have a few options to enforce this constraint:
>>
>> (1) Find the smallest naturally aligned region that covers the requested
>>     range. This is simple to compute and only takes one ATC_INV, but it
>>     will spill on lots of neighbouring ATC entries.
>>
>> (2) Align the start address to the region size (rounded up to a power of
>>     two), and send a second invalidation for the next range of the same
>>     size. Still not great, but reduces spilling.
>>
>> (3) Cover the range exactly with the smallest number of naturally aligned
>>     regions. This would be interesting to implement but as for (2),
>>     requires multiple ATC_INV.
>>
>> As I suspect ATC invalidation packets will be a very scarce resource,
>> we'll go with option (1) for now, and only send one big invalidation.
>>
>> Note that with io-pgtable, the unmap function is called for each page, so
>> this doesn't matter. The problem shows up when sharing page tables with
>> the MMU.
> Suppose this is true, I'd like to choose option (2). Because the worst cases of
> both (1) and (2) will not be happened, but the code of (2) will look clearer.
> And (2) is technically more acceptable.

I agree that (2) is a bit clearer, but the question is one of performance
rather than readability. I'd like to see some benchmarks or to experiment on
my own before switching to a two-invalidation system.

Intuitively, one big invalidation will result in more ATC thrashing and will
bring overall device performance down. But then according to the PCI spec,
ATC invalidations are grossly expensive: they have an upper bound of a
minute. I agree that this is highly improbable and might depend on the
range size, but purely from an architectural standpoint, reducing the
number of ATC invalidation requests is the priority, because this is much
worse than any performance slow-down incurred by ATC thrashing. And for the
moment I can only base my decisions on the architecture.

So I'd like to keep (1) for now, and update it to (2) (or even (3)) once
we have more hardware to experiment with.

Thanks,
Jean
Roy Franz May 25, 2017, 6:27 p.m. UTC | #14
On Tue, May 23, 2017 at 4:21 AM, Jean-Philippe Brucker
<jean-philippe.brucker@arm.com> wrote:
> On 23/05/17 09:41, Leizhen (ThunderTown) wrote:
>> On 2017/2/28 3:54, Jean-Philippe Brucker wrote:
>>> PCIe devices can implement their own TLB, named Address Translation Cache
>>> (ATC). Steps involved in the use and maintenance of such caches are:
>>>
>>> * Device sends an Address Translation Request for a given IOVA to the
>>>   IOMMU. If the translation succeeds, the IOMMU returns the corresponding
>>>   physical address, which is stored in the device's ATC.
>>>
>>> * Device can then use the physical address directly in a transaction.
>>>   A PCIe device does so by setting the TLP AT field to 0b10 - translated.
>>>   The SMMU might check that the device is allowed to send translated
>>>   transactions, and let it pass through.
>>>
>>> * When an address is unmapped, CPU sends a CMD_ATC_INV command to the
>>>   SMMU, that is relayed to the device.
>>>
>>> In theory, this doesn't require a lot of software intervention. The IOMMU
>>> driver needs to enable ATS when adding a PCI device, and send an
>>> invalidation request when unmapping. Note that this invalidation is
>>> allowed to take up to a minute, according to the PCIe spec. In
>>> addition, the invalidation queue on the ATC side is fairly small, 32 by
>>> default, so we cannot keep many invalidations in flight (see ATS spec
>>> section 3.5, Invalidate Flow Control).
>>>
>>> Handling these constraints properly would require to postpone
>>> invalidations, and keep the stale mappings until we're certain that all
>>> devices forgot about them. This requires major work in the page table
>>> managers, and is therefore not done by this patch.
>>>
>>>   Range calculation
>>>   -----------------
>>>
>>> The invalidation packet itself is a bit awkward: range must be naturally
>>> aligned, which means that the start address is a multiple of the range
>>> size. In addition, the size must be a power of two number of 4k pages. We
>>> have a few options to enforce this constraint:
>>>
>>> (1) Find the smallest naturally aligned region that covers the requested
>>>     range. This is simple to compute and only takes one ATC_INV, but it
>>>     will spill on lots of neighbouring ATC entries.
>>>
>>> (2) Align the start address to the region size (rounded up to a power of
>>>     two), and send a second invalidation for the next range of the same
>>>     size. Still not great, but reduces spilling.
>>>
>>> (3) Cover the range exactly with the smallest number of naturally aligned
>>>     regions. This would be interesting to implement but as for (2),
>>>     requires multiple ATC_INV.
>>>
>>> As I suspect ATC invalidation packets will be a very scarce resource,
>>> we'll go with option (1) for now, and only send one big invalidation.
>>>
>>> Note that with io-pgtable, the unmap function is called for each page, so
>>> this doesn't matter. The problem shows up when sharing page tables with
>>> the MMU.
>> Suppose this is true, I'd like to choose option (2). Because the worst cases of
>> both (1) and (2) will not be happened, but the code of (2) will look clearer.
>> And (2) is technically more acceptable.
>
> I agree that (2) is a bit clearer, but the question is of performance
> rather than readability. I'd like to see some benchmarks or experiment on
> my own before switching to a two-invalidation system.
>
> Intuitively one big invalidation will result in more ATC trashing and will
> bring overall device performance down. But then according to the PCI spec,
> ATC invalidations are grossly expensive, they have an upper bound of a
> minute. I agree that this is highly improbable and might depend on the
> range size, but purely from an architectural standpoint, reducing the
> number of ATC invalidation requests is the priority, because this is much
> worse than any performance slow-down incurred by ATC trashing. And for the
> moment I can only base my decisions on the architecture.
>
> So I'd like to keep (1) for now, and update it to (2) (or even (3)) once
> we have more hardware to experiment with.
>
> Thanks,
> Jean
>

I think (1) is a good place to start, as the same restricted encoding that
is used in the invalidations is also used in the translation responses - all
of the ATC entries were created with regions described this way.  We still
may end up with nothing but STU sized ATC entries, as TAs are free to
respond to large translation requests with multiple STU sized translations,
and in some cases this is the best that they can do.  Picking the optimal
strategy will depend on hardware, and maybe workload as well.

Thanks,
Roy



Patch

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 69d00416990d..e7b940146ae3 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -35,6 +35,7 @@ 
 #include <linux/of_iommu.h>
 #include <linux/of_platform.h>
 #include <linux/pci.h>
+#include <linux/pci-ats.h>
 #include <linux/platform_device.h>
 
 #include <linux/amba/bus.h>
@@ -102,6 +103,7 @@ 
 #define IDR5_OAS_48_BIT			(5 << IDR5_OAS_SHIFT)
 
 #define ARM_SMMU_CR0			0x20
+#define CR0_ATSCHK			(1 << 4)
 #define CR0_CMDQEN			(1 << 3)
 #define CR0_EVTQEN			(1 << 2)
 #define CR0_PRIQEN			(1 << 1)
@@ -343,6 +345,7 @@ 
 #define CMDQ_ERR_CERROR_NONE_IDX	0
 #define CMDQ_ERR_CERROR_ILL_IDX		1
 #define CMDQ_ERR_CERROR_ABT_IDX		2
+#define CMDQ_ERR_CERROR_ATC_INV_IDX	3
 
 #define CMDQ_0_OP_SHIFT			0
 #define CMDQ_0_OP_MASK			0xffUL
@@ -364,6 +367,15 @@ 
 #define CMDQ_TLBI_1_VA_MASK		~0xfffUL
 #define CMDQ_TLBI_1_IPA_MASK		0xfffffffff000UL
 
+#define CMDQ_ATC_0_SSID_SHIFT		12
+#define CMDQ_ATC_0_SSID_MASK		0xfffffUL
+#define CMDQ_ATC_0_SID_SHIFT		32
+#define CMDQ_ATC_0_SID_MASK		0xffffffffUL
+#define CMDQ_ATC_0_GLOBAL		(1UL << 9)
+#define CMDQ_ATC_1_SIZE_SHIFT		0
+#define CMDQ_ATC_1_SIZE_MASK		0x3fUL
+#define CMDQ_ATC_1_ADDR_MASK		~0xfffUL
+
 #define CMDQ_PRI_0_SSID_SHIFT		12
 #define CMDQ_PRI_0_SSID_MASK		0xfffffUL
 #define CMDQ_PRI_0_SID_SHIFT		32
@@ -417,6 +429,11 @@  module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
 MODULE_PARM_DESC(disable_bypass,
 	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
 
+static bool disable_ats_check;
+module_param_named(disable_ats_check, disable_ats_check, bool, S_IRUGO);
+MODULE_PARM_DESC(disable_ats_check,
+	"By default, the SMMU checks whether each incoming transaction marked as translated is allowed by the stream configuration. This option disables the check.");
+
 enum pri_resp {
 	PRI_RESP_DENY,
 	PRI_RESP_FAIL,
@@ -485,6 +502,15 @@  struct arm_smmu_cmdq_ent {
 			u64			addr;
 		} tlbi;
 
+		#define CMDQ_OP_ATC_INV		0x40
+		struct {
+			u32			sid;
+			u32			ssid;
+			u64			addr;
+			u8			size;
+			bool			global;
+		} atc;
+
 		#define CMDQ_OP_PRI_RESP	0x41
 		struct {
 			u32			sid;
@@ -662,6 +688,8 @@  struct arm_smmu_group {
 
 	struct list_head		devices;
 	spinlock_t			devices_lock;
+
+	bool				ats_enabled;
 };
 
 struct arm_smmu_option_prop {
@@ -839,6 +867,14 @@  static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 	case CMDQ_OP_TLBI_S12_VMALL:
 		cmd[0] |= (u64)ent->tlbi.vmid << CMDQ_TLBI_0_VMID_SHIFT;
 		break;
+	case CMDQ_OP_ATC_INV:
+		cmd[0] |= ent->substream_valid ? CMDQ_0_SSV : 0;
+		cmd[0] |= ent->atc.global ? CMDQ_ATC_0_GLOBAL : 0;
+		cmd[0] |= ent->atc.ssid << CMDQ_ATC_0_SSID_SHIFT;
+		cmd[0] |= (u64)ent->atc.sid << CMDQ_ATC_0_SID_SHIFT;
+		cmd[1] |= ent->atc.size << CMDQ_ATC_1_SIZE_SHIFT;
+		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
+		break;
 	case CMDQ_OP_PRI_RESP:
 		cmd[0] |= ent->substream_valid ? CMDQ_0_SSV : 0;
 		cmd[0] |= ent->pri.ssid << CMDQ_PRI_0_SSID_SHIFT;
@@ -874,6 +910,7 @@  static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
 		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
 		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
+		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
 	};
 
 	int i;
@@ -893,6 +930,13 @@  static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 		dev_err(smmu->dev, "retrying command fetch\n");
 	case CMDQ_ERR_CERROR_NONE_IDX:
 		return;
+	case CMDQ_ERR_CERROR_ATC_INV_IDX:
+		/*
+		 * CMD_SYNC failed because of ATC Invalidation completion
+		 * timeout. CONS is still pointing at the CMD_SYNC. Ensure other
+		 * operations complete by re-submitting the CMD_SYNC, cowardly
+		 * ignoring the ATC error.
+		 */
 	case CMDQ_ERR_CERROR_ILL_IDX:
 		/* Fallthrough */
 	default:
@@ -1084,9 +1128,6 @@  static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
 			 STRTAB_STE_1_S1C_CACHE_WBRA
 			 << STRTAB_STE_1_S1COR_SHIFT |
 			 STRTAB_STE_1_S1C_SH_ISH << STRTAB_STE_1_S1CSH_SHIFT |
-#ifdef CONFIG_PCI_ATS
-			 STRTAB_STE_1_EATS_TRANS << STRTAB_STE_1_EATS_SHIFT |
-#endif
 			 STRTAB_STE_1_STRW_NSEL1 << STRTAB_STE_1_STRW_SHIFT);
 
 		if (smmu->features & ARM_SMMU_FEAT_STALLS)
@@ -1115,6 +1156,10 @@  static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
 		val |= STRTAB_STE_0_CFG_S2_TRANS;
 	}
 
+	if (IS_ENABLED(CONFIG_PCI_ATS) && !ste_live)
+		dst[1] |= cpu_to_le64(STRTAB_STE_1_EATS_TRANS
+				      << STRTAB_STE_1_EATS_SHIFT);
+
 	arm_smmu_sync_ste_for_sid(smmu, sid);
 	dst[0] = cpu_to_le64(val);
 	arm_smmu_sync_ste_for_sid(smmu, sid);
@@ -1377,6 +1422,120 @@  static const struct iommu_gather_ops arm_smmu_gather_ops = {
 	.tlb_sync	= arm_smmu_tlb_sync,
 };
 
+static void arm_smmu_atc_invalidate_to_cmd(struct arm_smmu_device *smmu,
+					   unsigned long iova, size_t size,
+					   struct arm_smmu_cmdq_ent *cmd)
+{
+	size_t log2_span;
+	size_t span_mask;
+	size_t smmu_grain;
+	/* ATC invalidates are always on 4096 bytes pages */
+	size_t inval_grain_shift = 12;
+	unsigned long iova_start, iova_end;
+	unsigned long page_start, page_end;
+
+	smmu_grain	= 1ULL << __ffs(smmu->pgsize_bitmap);
+
+	/* In case parameters are not aligned on PAGE_SIZE */
+	iova_start	= round_down(iova, smmu_grain);
+	iova_end	= round_up(iova + size, smmu_grain) - 1;
+
+	page_start	= iova_start >> inval_grain_shift;
+	page_end	= iova_end >> inval_grain_shift;
+
+	/*
+	 * Find the smallest power of two that covers the range. Most
+	 * significant differing bit between start and end address indicates the
+	 * required span, ie. fls(start ^ end). For example:
+	 *
+	 * We want to invalidate pages [8; 11]. This is already the ideal range:
+	 *		x = 0b1000 ^ 0b1011 = 0b11
+	 *		span = 1 << fls(x) = 4
+	 *
+	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
+	 *		x = 0b0111 ^ 0b1010 = 0b1101
+	 *		span = 1 << fls(x) = 16
+	 */
+	log2_span	= fls_long(page_start ^ page_end);
+	span_mask	= (1ULL << log2_span) - 1;
+
+	page_start	&= ~span_mask;
+
+	*cmd = (struct arm_smmu_cmdq_ent) {
+		.opcode	= CMDQ_OP_ATC_INV,
+		.atc	= {
+			.addr = page_start << inval_grain_shift,
+			.size = log2_span,
+		}
+	};
+}
+
+static int arm_smmu_atc_invalidate_master(struct arm_smmu_master_data *master,
+					  struct arm_smmu_cmdq_ent *cmd)
+{
+	int i;
+	struct iommu_fwspec *fwspec = master->dev->iommu_fwspec;
+	struct pci_dev *pdev = to_pci_dev(master->dev);
+
+	if (!pdev->ats_enabled)
+		return 0;
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		cmd->atc.sid = fwspec->ids[i];
+
+		dev_dbg(master->smmu->dev,
+			"ATC invalidate %#x:%#x:%#llx-%#llx, esz=%d\n",
+			cmd->atc.sid, cmd->atc.ssid, cmd->atc.addr,
+			cmd->atc.addr + (1 << (cmd->atc.size + 12)) - 1,
+			cmd->atc.size);
+
+		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
+	}
+
+	return 0;
+}
+
+static size_t arm_smmu_atc_invalidate_domain(struct arm_smmu_domain *smmu_domain,
+					     unsigned long iova, size_t size)
+{
+	unsigned long flags;
+	struct arm_smmu_cmdq_ent cmd = {0};
+	struct arm_smmu_group *smmu_group;
+	struct arm_smmu_master_data *master;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_cmdq_ent sync_cmd = {
+		.opcode = CMDQ_OP_CMD_SYNC,
+	};
+
+	spin_lock_irqsave(&smmu_domain->groups_lock, flags);
+
+	list_for_each_entry(smmu_group, &smmu_domain->groups, domain_head) {
+		if (!smmu_group->ats_enabled)
+			continue;
+
+		/* Initialise command lazily */
+		if (!cmd.opcode)
+			arm_smmu_atc_invalidate_to_cmd(smmu, iova, size, &cmd);
+
+		spin_lock(&smmu_group->devices_lock);
+
+		list_for_each_entry(master, &smmu_group->devices, group_head)
+			arm_smmu_atc_invalidate_master(master, &cmd);
+
+		/*
+		 * TODO: ensure we do a sync whenever we have sent ats_queue_depth
+		 * invalidations to the same device.
+		 */
+		arm_smmu_cmdq_issue_cmd(smmu, &sync_cmd);
+
+		spin_unlock(&smmu_group->devices_lock);
+	}
+
+	spin_unlock_irqrestore(&smmu_domain->groups_lock, flags);
+
+	return size;
+}
+
 /* IOMMU API */
 static bool arm_smmu_capable(enum iommu_cap cap)
 {
@@ -1782,7 +1941,10 @@  arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 
 	spin_lock_irqsave(&smmu_domain->pgtbl_lock, flags);
 	ret = ops->unmap(ops, iova, size);
+	if (ret)
+		ret = arm_smmu_atc_invalidate_domain(smmu_domain, iova, size);
 	spin_unlock_irqrestore(&smmu_domain->pgtbl_lock, flags);
+
 	return ret;
 }
 
@@ -1830,11 +1992,63 @@  static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 	return sid < limit;
 }
 
+/*
+ * Returns -ENOSYS if ATS is not supported either by the device or by the SMMU
+ */
+static int arm_smmu_enable_ats(struct arm_smmu_master_data *master)
+{
+	int ret;
+	size_t stu;
+	struct pci_dev *pdev;
+	struct arm_smmu_device *smmu = master->smmu;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_ATS) || !dev_is_pci(master->dev))
+		return -ENOSYS;
+
+	pdev = to_pci_dev(master->dev);
+
+#ifdef CONFIG_PCI_ATS
+	if (!pdev->ats_cap)
+		return -ENOSYS;
+#else
+	return -ENOSYS;
+#endif
+
+	/* Smallest Translation Unit: log2 of the smallest supported granule */
+	stu = __ffs(smmu->pgsize_bitmap);
+
+	ret = pci_enable_ats(pdev, stu);
+	if (ret) {
+		dev_err(&pdev->dev, "cannot enable ATS: %d\n", ret);
+		return ret;
+	}
+
+	dev_dbg(&pdev->dev, "enabled ATS with STU = %zu\n", stu);
+
+	return 0;
+}
+
+static void arm_smmu_disable_ats(struct arm_smmu_master_data *master)
+{
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(master->dev))
+		return;
+
+	pdev = to_pci_dev(master->dev);
+
+	if (!pdev->ats_enabled)
+		return;
+
+	pci_disable_ats(pdev);
+}
+
 static struct iommu_ops arm_smmu_ops;
 
 static int arm_smmu_add_device(struct device *dev)
 {
 	int i, ret;
+	bool ats_enabled;
 	unsigned long flags;
 	struct arm_smmu_device *smmu;
 	struct arm_smmu_group *smmu_group;
@@ -1880,19 +2094,31 @@  static int arm_smmu_add_device(struct device *dev)
 		}
 	}
 
+	ats_enabled = !arm_smmu_enable_ats(master);
+
 	group = iommu_group_get_for_dev(dev);
-	if (!IS_ERR(group)) {
-		smmu_group = to_smmu_group(group);
+	if (IS_ERR(group)) {
+		ret = PTR_ERR(group);
+		goto err_disable_ats;
+	}
 
-		spin_lock_irqsave(&smmu_group->devices_lock, flags);
-		list_add(&master->group_head, &smmu_group->devices);
-		spin_unlock_irqrestore(&smmu_group->devices_lock, flags);
+	smmu_group = to_smmu_group(group);
 
-		iommu_group_put(group);
-		iommu_device_link(&smmu->iommu, dev);
-	}
+	smmu_group->ats_enabled |= ats_enabled;
 
-	return PTR_ERR_OR_ZERO(group);
+	spin_lock_irqsave(&smmu_group->devices_lock, flags);
+	list_add(&master->group_head, &smmu_group->devices);
+	spin_unlock_irqrestore(&smmu_group->devices_lock, flags);
+
+	iommu_group_put(group);
+	iommu_device_link(&smmu->iommu, dev);
+
+	return 0;
+
+err_disable_ats:
+	arm_smmu_disable_ats(master);
+
+	return ret;
 }
 
 static void arm_smmu_remove_device(struct device *dev)
@@ -1921,6 +2147,8 @@  static void arm_smmu_remove_device(struct device *dev)
 		spin_unlock_irqrestore(&smmu_group->devices_lock, flags);
 
 		iommu_group_put(group);
+
+		arm_smmu_disable_ats(master);
 	}
 
 	iommu_group_remove_device(dev);
@@ -2485,6 +2713,16 @@  static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 		}
 	}
 
+	if (smmu->features & ARM_SMMU_FEAT_ATS && !disable_ats_check) {
+		enables |= CR0_ATSCHK;
+		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+					      ARM_SMMU_CR0ACK);
+		if (ret) {
+			dev_err(smmu->dev, "failed to enable ATS check\n");
+			return ret;
+		}
+	}
+
 	ret = arm_smmu_setup_irqs(smmu);
 	if (ret) {
 		dev_err(smmu->dev, "failed to setup irqs\n");