
[v4,04/21] IOMMU: have iommu_{,un}map() split requests into largest possible chunks

Message ID: 227d0bd1-c448-6024-7b98-220271d9bf63@suse.com
State: Superseded
Series: IOMMU: superpage support when not sharing pagetables

Commit Message

Jan Beulich April 25, 2022, 8:33 a.m. UTC
Introduce a helper function to determine the largest possible mapping
that can cover a request (or the next part of it that is left to be
processed).

To avoid adding yet more recurring dfn_add() / mfn_add() invocations to
the two callers of the new helper, also introduce local variables
holding the values presently being operated on.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Re-base over new earlier patch.
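
The order-selection logic described above can be reproduced in a small
standalone sketch: __builtin_ffsl() stands in for Xen's
find_first_set_bit(), and the page-size mask (4k + 2M + 1G) is an
assumption for illustration; the real helper is in the patch at the
bottom.

    #include <assert.h>

    #define PAGE_SHIFT 12

    /* Bit N set => the IOMMU can map 2^N-byte chunks (assumed mask). */
    static const unsigned long page_sizes =
        (1UL << 12) | (1UL << 21) | (1UL << 30);

    static unsigned int mapping_order(unsigned long dfn, unsigned long mfn,
                                      unsigned long nr)
    {
        unsigned long res = dfn | mfn;  /* combined alignment constraint */
        unsigned long sizes = page_sizes;
        unsigned int bit = __builtin_ffsl(sizes) - 1, order = 0;

        assert(bit == PAGE_SHIFT);      /* smallest size is one page */

        while ( (sizes = (sizes >> bit) & ~1UL) )
        {
            unsigned long mask;

            bit = __builtin_ffsl(sizes) - 1;
            mask = (1UL << bit) - 1;
            /* Stop once the remaining request is smaller than the next
             * size, or dfn/mfn lack the alignment it requires. */
            if ( nr <= mask || (res & mask) )
                break;
            order += bit;
            nr >>= bit;
            res >>= bit;
        }

        return order;
    }

E.g. mapping_order(0x200, 0x400, 0x200) yields 9 (a single 2M mapping),
while mapping_order(0x201, 0x400, 0x1ff) yields 0, since the dfn is
misaligned and the remaining count falls short of 512 pages.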

Comments

Roger Pau Monne May 3, 2022, 12:37 p.m. UTC | #1
On Mon, Apr 25, 2022 at 10:33:32AM +0200, Jan Beulich wrote:
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -307,11 +338,10 @@ int iommu_map(struct domain *d, dfn_t df
>          if ( !d->is_shutting_down && printk_ratelimit() )
>              printk(XENLOG_ERR
>                     "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
> -                   d->domain_id, dfn_x(dfn_add(dfn, i)),
> -                   mfn_x(mfn_add(mfn, i)), rc);
> +                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);

Since you are already adjusting the line, I wouldn't mind if you also
switched to %pd at once (and made the same adjustment in
iommu_unmap).

>  
>          /* while statement to satisfy __must_check */
> -        while ( iommu_unmap(d, dfn, i, flush_flags) )
> +        while ( iommu_unmap(d, dfn0, i, flush_flags) )

To match previous behavior you likely need to use i + (1UL << order),
so that pages covered by the map_page call above are also taken care of
in the unmap request?

With that fixed:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

(Feel free to adjust the printks to use %pd or not; that's not a
requirement for the Rb.)

Thanks, Roger.
Jan Beulich May 3, 2022, 2:44 p.m. UTC | #2
On 03.05.2022 14:37, Roger Pau Monné wrote:
> On Mon, Apr 25, 2022 at 10:33:32AM +0200, Jan Beulich wrote:
>> --- a/xen/drivers/passthrough/iommu.c
>> +++ b/xen/drivers/passthrough/iommu.c
>> @@ -307,11 +338,10 @@ int iommu_map(struct domain *d, dfn_t df
>>          if ( !d->is_shutting_down && printk_ratelimit() )
>>              printk(XENLOG_ERR
>>                     "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
>> -                   d->domain_id, dfn_x(dfn_add(dfn, i)),
>> -                   mfn_x(mfn_add(mfn, i)), rc);
>> +                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
> 
> Since you are already adjusting the line, I wouldn't mind if you also
> switched to %pd at once (and made the same adjustment in
> iommu_unmap).

I did consider doing so, but decided against since this would lead
to also touching the format string (which right now is unaltered).
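
For reference, the %pd form being suggested would look roughly like the
following (illustrative only, not part of the patch); it indeed touches
the format string as well:

    printk(XENLOG_ERR
           "%pd: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
           d, dfn_x(dfn), mfn_x(mfn), rc);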

>>  
>>          /* while statement to satisfy __must_check */
>> -        while ( iommu_unmap(d, dfn, i, flush_flags) )
>> +        while ( iommu_unmap(d, dfn0, i, flush_flags) )
> 
> To match previous behavior you likely need to use i + (1UL << order),
> so that pages covered by the map_page call above are also taken care of
> in the unmap request?

I'm afraid I don't follow: Prior behavior was to unmap only what
was mapped on earlier iterations. This continues to be that way.
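
A concrete trace may help (numbers invented: 4k and 2M page sizes, a
2M-aligned dfn0, page_count = 1024):

    i = 0:   order = 9, map_page at dfn0       -> succeeds, pages [0, 512) mapped
    i = 512: order = 9, map_page at dfn0 + 512 -> fails
             iommu_unmap(d, dfn0, i /* == 512 */, flush_flags) tears
             down exactly the pages mapped so far; the failed 2M chunk
             was never entered into the page tables and hence needs no
             unmapping.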

> With that fixed:
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, but I'll wait before applying this.

Jan
Roger Pau Monne May 4, 2022, 10:20 a.m. UTC | #3
On Tue, May 03, 2022 at 04:44:45PM +0200, Jan Beulich wrote:
> On 03.05.2022 14:37, Roger Pau Monné wrote:
> > On Mon, Apr 25, 2022 at 10:33:32AM +0200, Jan Beulich wrote:
> >> --- a/xen/drivers/passthrough/iommu.c
> >> +++ b/xen/drivers/passthrough/iommu.c
> >> @@ -307,11 +338,10 @@ int iommu_map(struct domain *d, dfn_t df
> >>          if ( !d->is_shutting_down && printk_ratelimit() )
> >>              printk(XENLOG_ERR
> >>                     "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
> >> -                   d->domain_id, dfn_x(dfn_add(dfn, i)),
> >> -                   mfn_x(mfn_add(mfn, i)), rc);
> >> +                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
> > 
> > Since you are already adjusting the line, I wouldn't mind if you also
> > switched to %pd at once (and made the same adjustment in
> > iommu_unmap).
> 
> I did consider doing so, but decided against since this would lead
> to also touching the format string (which right now is unaltered).
> 
> >>  
> >>          /* while statement to satisfy __must_check */
> >> -        while ( iommu_unmap(d, dfn, i, flush_flags) )
> >> +        while ( iommu_unmap(d, dfn0, i, flush_flags) )
> > 
> > To match previous behavior you likely need to use i + (1UL << order),
> > so that pages covered by the map_page call above are also taken care of
> > in the unmap request?
> 
> I'm afraid I don't follow: Prior behavior was to unmap only what
> was mapped on earlier iterations. This continues to be that way.

My bad, I was wrong: I somehow assumed that the previous behavior would
also pass the failed map entry to the unmap, but that's not the case.

> > With that fixed:
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks, but I'll wait before applying this.

I withdraw my previous comment; feel free to apply this.

Thanks, Roger.

Patch

--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -283,12 +283,38 @@  void iommu_domain_destroy(struct domain
     arch_iommu_domain_destroy(d);
 }
 
-int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
+static unsigned int mapping_order(const struct domain_iommu *hd,
+                                  dfn_t dfn, mfn_t mfn, unsigned long nr)
+{
+    unsigned long res = dfn_x(dfn) | mfn_x(mfn);
+    unsigned long sizes = hd->platform_ops->page_sizes;
+    unsigned int bit = find_first_set_bit(sizes), order = 0;
+
+    ASSERT(bit == PAGE_SHIFT);
+
+    while ( (sizes = (sizes >> bit) & ~1) )
+    {
+        unsigned long mask;
+
+        bit = find_first_set_bit(sizes);
+        mask = (1UL << bit) - 1;
+        if ( nr <= mask || (res & mask) )
+            break;
+        order += bit;
+        nr >>= bit;
+        res >>= bit;
+    }
+
+    return order;
+}
+
+int iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
               unsigned long page_count, unsigned int flags,
               unsigned int *flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
+    unsigned int order;
     int rc = 0;
 
     if ( !is_iommu_enabled(d) )
@@ -296,10 +322,15 @@  int iommu_map(struct domain *d, dfn_t df
 
     ASSERT(!IOMMUF_order(flags));
 
-    for ( i = 0; i < page_count; i++ )
+    for ( i = 0; i < page_count; i += 1UL << order )
     {
-        rc = iommu_call(hd->platform_ops, map_page, d, dfn_add(dfn, i),
-                        mfn_add(mfn, i), flags, flush_flags);
+        dfn_t dfn = dfn_add(dfn0, i);
+        mfn_t mfn = mfn_add(mfn0, i);
+
+        order = mapping_order(hd, dfn, mfn, page_count - i);
+
+        rc = iommu_call(hd->platform_ops, map_page, d, dfn, mfn,
+                        flags | IOMMUF_order(order), flush_flags);
 
         if ( likely(!rc) )
             continue;
@@ -307,11 +338,10 @@  int iommu_map(struct domain *d, dfn_t df
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
                    "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
-                   d->domain_id, dfn_x(dfn_add(dfn, i)),
-                   mfn_x(mfn_add(mfn, i)), rc);
+                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
 
         /* while statement to satisfy __must_check */
-        while ( iommu_unmap(d, dfn, i, flush_flags) )
+        while ( iommu_unmap(d, dfn0, i, flush_flags) )
             break;
 
         if ( !is_hardware_domain(d) )
@@ -343,20 +373,25 @@  int iommu_legacy_map(struct domain *d, d
     return rc;
 }
 
-int iommu_unmap(struct domain *d, dfn_t dfn, unsigned long page_count,
+int iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
                 unsigned int *flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
+    unsigned int order;
     int rc = 0;
 
     if ( !is_iommu_enabled(d) )
         return 0;
 
-    for ( i = 0; i < page_count; i++ )
+    for ( i = 0; i < page_count; i += 1UL << order )
     {
-        int err = iommu_call(hd->platform_ops, unmap_page, d, dfn_add(dfn, i),
-                             0, flush_flags);
+        dfn_t dfn = dfn_add(dfn0, i);
+        int err;
+
+        order = mapping_order(hd, dfn, _mfn(0), page_count - i);
+        err = iommu_call(hd->platform_ops, unmap_page, d, dfn,
+                         order, flush_flags);
 
         if ( likely(!err) )
             continue;
@@ -364,7 +399,7 @@  int iommu_unmap(struct domain *d, dfn_t
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
                    "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
-                   d->domain_id, dfn_x(dfn_add(dfn, i)), err);
+                   d->domain_id, dfn_x(dfn), err);
 
         if ( !rc )
             rc = err;
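
To see the reworked loop's chunking in action, the following
hypothetical driver can be appended to the mapping_order() sketch given
after the commit message (numbers invented, mfn taken equal to dfn for
simplicity; not part of the patch):

    #include <stdio.h>

    int main(void)
    {
        unsigned long i, page_count = 0x403, dfn0 = 0x1ff;
        unsigned int order;

        /* Mirror of the iommu_map() loop structure from the patch. */
        for ( i = 0; i < page_count; i += 1UL << order )
        {
            order = mapping_order(dfn0 + i, dfn0 + i, page_count - i);
            printf("map dfn %#lx, order %u (%lu pages)\n",
                   dfn0 + i, order, 1UL << order);
        }

        return 0;
    }

Expected output with the assumed 4k + 2M + 1G size mask:

    map dfn 0x1ff, order 0 (1 pages)
    map dfn 0x200, order 9 (512 pages)
    map dfn 0x400, order 9 (512 pages)
    map dfn 0x600, order 0 (1 pages)
    map dfn 0x601, order 0 (1 pages)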