
[L1TF,MDS,GT,v2,1/2] common/grant_table: harden bound accesses

Message ID 1562763277-11674-2-git-send-email-nmanthey@amazon.de (mailing list archive)

Commit Message

Norbert Manthey July 10, 2019, 12:54 p.m. UTC
Guests can issue grant table operations and provide guest-controlled
data to them. This data is used as an index for memory loads after
bounds checks have been performed. To avoid speculative out-of-bounds
accesses, we use the array_index_nospec macro where applicable, or the
block_speculation macro. Note that block_speculation is used on all
paths in shared_entry_header and nr_grant_entries. This way, after a
call to such a function, all bounds checks that happened before become
architecturally visible, so that no additional protection is required
for corresponding array accesses. As the way we introduce an lfence
instruction might allow the compiler to reload certain values from
memory multiple times, we try to avoid speculatively continuing
execution with stale register data by moving relevant data into
function-local variables.

Speculative execution is not blocked in case one of the following
properties is true:
 - path cannot be triggered by the guest
 - path does not return to the guest
 - path does not result in an out-of-bound access
 - path cannot be executed repeatedly
Only a path for which none of the above properties holds allows
actually leaking continuous chunks of memory. Therefore, we only add
the penalty of protective mechanisms where a potential speculative
out-of-bounds access lacks all of the above properties.

This commit addresses only out-of-bounds accesses whose index is
directly controlled by the guest, and where the index is checked
beforehand. Potential out-of-bounds accesses that are caused by
speculatively evaluating the version of the current table are not
addressed in this commit. Hence, speculative out-of-bounds accesses
might still be possible; for example in gnttab_get_status_frame_mfn,
when calling gnttab_grow_table, the assertion that the grant table
version equals two might not hold under speculation.

This is part of the speculative hardening effort.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
---

Notes:
  v2:  Mention version based blocking for upcoming commit
       Introduce local variable as op->ref replacement
       Use array_index_nospec in gnttab_get_status_frame_mfn

 xen/common/grant_table.c | 80 +++++++++++++++++++++++++++++++++---------------
 1 file changed, 55 insertions(+), 25 deletions(-)

Comments

Jan Beulich July 11, 2019, 12:34 p.m. UTC | #1
On 10.07.2019 14:54, Norbert Manthey wrote:
> Guests can issue grant table operations and provide guest controlled
> data to them. This data is used as index for memory loads after bound
> checks have been done. To avoid speculative out-of-bound accesses, we
> use the array_index_nospec macro where applicable, or the macro
> block_speculation. Note, the block_speculation macro is used on all
> path in shared_entry_header and nr_grant_entries. This way, after a
> call to such a function, all bound checks that happened before become
> architectural visible, so that no additional protection is required
> for corresponding array accesses. As the way we introduce an lfence
> instruction might allow the compiler to reload certain values from
> memory multiple times, we try to avoid speculatively continuing
> execution with stale register data by moving relevant data into
> function local variables.
> 
> Speculative execution is not blocked in case one of the following
> properties is true:
>   - path cannot be triggered by the guest
>   - path does not return to the guest
>   - path does not result in an out-of-bound access
>   - path cannot be executed repeatedly

Upon re-reading I think this last item isn't fully applicable: I think
you attach such an attribute to domain creation/destruction functions.
Those aren't executed repeatedly for a single guest, but e.g. rapid
rebooting of a guest (or several ones) may actually be able to train
such paths.

> @@ -2091,6 +2100,7 @@ gnttab_transfer(
>       struct page_info *page;
>       int i;
>       struct gnttab_transfer gop;
> +    grant_ref_t ref;

This declaration would better live in the more narrow scope it's
(only) used in.

> @@ -2237,8 +2247,14 @@ gnttab_transfer(
>            */
>           spin_unlock(&e->page_alloc_lock);
>           okay = gnttab_prepare_for_transfer(e, d, gop.ref);
> +        ref = gop.ref;

Other than in the earlier cases here you copy a variable that's
already local to the function. Is this really helpful?

Independent of this - is there a particular reason you latch the
value into the (second) local variable only after its first use? It
likely won't matter much, but it's a little puzzling nevertheless.

> -        if ( unlikely(!okay || assign_pages(e, page, 0, MEMF_no_refcount)) )
> +        /*
> +         * Make sure the reference bound check in gnttab_prepare_for_transfer
> +         * is respected and speculative execution is blocked accordingly
> +         */
> +        if ( unlikely(!evaluate_nospec(okay)) ||
> +            unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )

If I can trust my mail UI (which I'm not sure I can) there's an
indentation issue here.

> @@ -3853,7 +3879,8 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>               return -EINVAL;
>       }
>   
> -    *mfn = _mfn(virt_to_mfn(gt->status[idx]));
> +    /* Make sure idx is bounded wrt nr_status_frames */
> +    *mfn = _mfn(virt_to_mfn(gt->status[array_index_nospec(idx, nr_status_frames(gt))]));

This and ...

> @@ -3882,7 +3909,8 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>               return -EINVAL;
>       }
>   
> -    *mfn = _mfn(virt_to_mfn(gt->shared_raw[idx]));
> +    /* Make sure idx is bounded wrt nr_status_frames */
> +    *mfn = _mfn(virt_to_mfn(gt->shared_raw[array_index_nospec(idx, nr_grant_frames(gt))]));

... this line are too long now and hence need wrapping.

Jan
Norbert Manthey July 12, 2019, 8:40 a.m. UTC | #2
On 7/11/19 14:34, Jan Beulich wrote:
> On 10.07.2019 14:54, Norbert Manthey wrote:
>> Guests can issue grant table operations and provide guest controlled
>> data to them. This data is used as index for memory loads after bound
>> checks have been done. To avoid speculative out-of-bound accesses, we
>> use the array_index_nospec macro where applicable, or the macro
>> block_speculation. Note, the block_speculation macro is used on all
>> path in shared_entry_header and nr_grant_entries. This way, after a
>> call to such a function, all bound checks that happened before become
>> architectural visible, so that no additional protection is required
>> for corresponding array accesses. As the way we introduce an lfence
>> instruction might allow the compiler to reload certain values from
>> memory multiple times, we try to avoid speculatively continuing
>> execution with stale register data by moving relevant data into
>> function local variables.
>>
>> Speculative execution is not blocked in case one of the following
>> properties is true:
>>   - path cannot be triggered by the guest
>>   - path does not return to the guest
>>   - path does not result in an out-of-bound access
>>   - path cannot be executed repeatedly
> Upon re-reading I think this last item isn't fully applicable: I think
> you attach such an attribute to domain creation/destruction functions.
> Those aren't executed repeatedly for a single guest, but e.g. rapid
> rebooting of a guest (or several ones) may actually be able to train
> such paths.
True, but a rebooted domain might come up on a different core, which
results in using different hardware structures. The "repeatedly" is open
to interpretation, I agree. From my understanding, it is one of the
properties required to reliably target a speculative out-of-bound access.
>
>> @@ -2091,6 +2100,7 @@ gnttab_transfer(
>>       struct page_info *page;
>>       int i;
>>       struct gnttab_transfer gop;
>> +    grant_ref_t ref;
> This declaration would better live in the more narrow scope it's
> (only) used in.
I will move it accordingly, however, as discussed below, I'll drop this
one instead.
>
>> @@ -2237,8 +2247,14 @@ gnttab_transfer(
>>            */
>>           spin_unlock(&e->page_alloc_lock);
>>           okay = gnttab_prepare_for_transfer(e, d, gop.ref);
>> +        ref = gop.ref;
> Other than in the earlier cases here you copy a variable that's
> already local to the function. Is this really helpful?
I wanted to make the two as close as possible; I will change it back,
as I did not find a difference in the binary.
>
> Independent of this - is there a particular reason you latch the
> value into the (second) local variable only after its first use? It
> likely won't matter much, but it's a little puzzling nevertheless.
I wanted to make the use of the new variable as close as possible to the
bound check and instruction blocking speculation.
>
>> -        if ( unlikely(!okay || assign_pages(e, page, 0, MEMF_no_refcount)) )
>> +        /*
>> +         * Make sure the reference bound check in gnttab_prepare_for_transfer
>> +         * is respected and speculative execution is blocked accordingly
>> +         */
>> +        if ( unlikely(!evaluate_nospec(okay)) ||
>> +            unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )
> If I can trust my mail UI (which I'm not sure I can) there's an
> indentation issue here.
It looks fine to me, but I'll double check.
>
>> @@ -3853,7 +3879,8 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>>               return -EINVAL;
>>       }
>>   
>> -    *mfn = _mfn(virt_to_mfn(gt->status[idx]));
>> +    /* Make sure idx is bounded wrt nr_status_frames */
>> +    *mfn = _mfn(virt_to_mfn(gt->status[array_index_nospec(idx, nr_status_frames(gt))]));
> This and ...
>
>> @@ -3882,7 +3909,8 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>>               return -EINVAL;
>>       }
>>   
>> -    *mfn = _mfn(virt_to_mfn(gt->shared_raw[idx]));
>> +    /* Make sure idx is bounded wrt nr_status_frames */
>> +    *mfn = _mfn(virt_to_mfn(gt->shared_raw[array_index_nospec(idx, nr_grant_frames(gt))]));
> ... this line are too long now and hence need wrapping.

I'll wrap the lines.

Best,
Norbert
Jan Beulich July 15, 2019, 6:32 a.m. UTC | #3
On 12.07.2019 10:40, Norbert Manthey wrote:
> On 7/11/19 14:34, Jan Beulich wrote:
>> On 10.07.2019 14:54, Norbert Manthey wrote:
>>> Guests can issue grant table operations and provide guest controlled
>>> data to them. This data is used as index for memory loads after bound
>>> checks have been done. To avoid speculative out-of-bound accesses, we
>>> use the array_index_nospec macro where applicable, or the macro
>>> block_speculation. Note, the block_speculation macro is used on all
>>> path in shared_entry_header and nr_grant_entries. This way, after a
>>> call to such a function, all bound checks that happened before become
>>> architectural visible, so that no additional protection is required
>>> for corresponding array accesses. As the way we introduce an lfence
>>> instruction might allow the compiler to reload certain values from
>>> memory multiple times, we try to avoid speculatively continuing
>>> execution with stale register data by moving relevant data into
>>> function local variables.
>>>
>>> Speculative execution is not blocked in case one of the following
>>> properties is true:
>>>    - path cannot be triggered by the guest
>>>    - path does not return to the guest
>>>    - path does not result in an out-of-bound access
>>>    - path cannot be executed repeatedly
>> Upon re-reading I think this last item isn't fully applicable: I think
>> you attach such an attribute to domain creation/destruction functions.
>> Those aren't executed repeatedly for a single guest, but e.g. rapid
>> rebooting of a guest (or several ones) may actually be able to train
>> such paths.
> True, but a rebooted domain might come up on a different core, which
> results in using different hardware structures. The "repeatedly" is open
> to be interpreted, I agree. From my understanding, it belongs to the
> properties to have to reliably target a speculative out-of-bound access.

Looks like we're taking a somewhat different perspective here: I don't
think "reliably" is the criterion to go by; instead it would be
"there's a reasonable chance". Plus the smaller the host, the fewer
possibilities there are for coming back up on a different core, yet imo
we shouldn't favor large hosts in our considerations.

Jan

Patch

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -912,6 +912,7 @@  map_grant_ref(
 {
     struct domain *ld, *rd, *owner = NULL;
     struct grant_table *lgt, *rgt;
+    grant_ref_t ref;
     struct vcpu   *led;
     grant_handle_t handle;
     mfn_t mfn;
@@ -975,13 +976,15 @@  map_grant_ref(
     grant_read_lock(rgt);
 
     /* Bounds check on the grant ref */
-    if ( unlikely(op->ref >= nr_grant_entries(rgt)))
+    ref = op->ref;
+    if ( unlikely(ref >= nr_grant_entries(rgt)))
         PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
-                 op->ref, rgt->domain->domain_id);
+                 ref, rgt->domain->domain_id);
 
-    act = active_entry_acquire(rgt, op->ref);
-    shah = shared_entry_header(rgt, op->ref);
-    status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
+    /* This call also ensures the above check cannot be passed speculatively */
+    shah = shared_entry_header(rgt, ref);
+    status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, ref);
+    act = active_entry_acquire(rgt, ref);
 
     /* If already pinned, check the active domid and avoid refcnt overflow. */
     if ( act->pin &&
@@ -1004,8 +1007,8 @@  map_grant_ref(
         if ( !act->pin )
         {
             unsigned long gfn = rgt->gt_version == 1 ?
-                                shared_entry_v1(rgt, op->ref).frame :
-                                shared_entry_v2(rgt, op->ref).full_page.frame;
+                                shared_entry_v1(rgt, ref).frame :
+                                shared_entry_v2(rgt, ref).full_page.frame;
 
             rc = get_paged_frame(gfn, &mfn, &pg,
                                  op->flags & GNTMAP_readonly, rd);
@@ -1018,7 +1021,7 @@  map_grant_ref(
             act->length = PAGE_SIZE;
             act->is_sub_page = false;
             act->trans_domain = rd;
-            act->trans_gref = op->ref;
+            act->trans_gref = ref;
         }
     }
 
@@ -1266,6 +1269,7 @@  unmap_common(
     domid_t          dom;
     struct domain   *ld, *rd;
     struct grant_table *lgt, *rgt;
+    grant_ref_t ref;
     struct active_grant_entry *act;
     s16              rc = 0;
     struct grant_mapping *map;
@@ -1319,6 +1323,7 @@  unmap_common(
 
     op->rd = rd;
     op->ref = map->ref;
+    ref = map->ref;
 
     /*
      * We can't assume there was no racing unmap for this maptrack entry,
@@ -1328,7 +1333,7 @@  unmap_common(
      * invalid lock.
      */
     smp_rmb();
-    if ( unlikely(op->ref >= nr_grant_entries(rgt)) )
+    if ( unlikely(ref >= nr_grant_entries(rgt)) )
     {
         gdprintk(XENLOG_WARNING, "Unstable d%d handle %#x\n",
                  rgt->domain->domain_id, op->handle);
@@ -1337,7 +1342,10 @@  unmap_common(
         goto unlock_out;
     }
 
-    act = active_entry_acquire(rgt, op->ref);
+    /* Make sure the above bound check cannot be bypassed speculatively */
+    block_speculation();
+
+    act = active_entry_acquire(rgt, ref);
 
     /*
      * Note that we (ab)use the active entry lock here to protect against
@@ -1350,7 +1358,7 @@  unmap_common(
     flags = read_atomic(&map->flags);
     smp_rmb();
     if ( unlikely(!flags) || unlikely(map->domid != dom) ||
-         unlikely(map->ref != op->ref) )
+         unlikely(map->ref != ref) )
     {
         gdprintk(XENLOG_WARNING, "Unstable handle %#x\n", op->handle);
         rc = GNTST_bad_handle;
@@ -1434,7 +1442,7 @@  unmap_common_complete(struct gnttab_unmap_common *op)
     struct page_info *pg;
     uint16_t *status;
 
-    if ( !op->done )
+    if ( evaluate_nospec(!op->done) )
     {
         /* unmap_common() didn't do anything - nothing to complete. */
         return;
@@ -2042,6 +2050,7 @@  gnttab_prepare_for_transfer(
         goto fail;
     }
 
+    /* This call also ensures the above check cannot be passed speculatively */
     raw_shah = (uint32_t *)shared_entry_header(rgt, ref);
     scombo.raw = ACCESS_ONCE(*raw_shah);
 
@@ -2091,6 +2100,7 @@  gnttab_transfer(
     struct page_info *page;
     int i;
     struct gnttab_transfer gop;
+    grant_ref_t ref;
     mfn_t mfn;
     unsigned int max_bitsize;
     struct active_grant_entry *act;
@@ -2237,8 +2247,14 @@  gnttab_transfer(
          */
         spin_unlock(&e->page_alloc_lock);
         okay = gnttab_prepare_for_transfer(e, d, gop.ref);
+        ref = gop.ref;
 
-        if ( unlikely(!okay || assign_pages(e, page, 0, MEMF_no_refcount)) )
+        /*
+         * Make sure the reference bound check in gnttab_prepare_for_transfer
+         * is respected and speculative execution is blocked accordingly
+         */
+        if ( unlikely(!evaluate_nospec(okay)) ||
+             unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )
         {
             bool drop_dom_ref;
 
@@ -2268,11 +2284,11 @@  gnttab_transfer(
 
         /* Tell the guest about its new page frame. */
         grant_read_lock(e->grant_table);
-        act = active_entry_acquire(e->grant_table, gop.ref);
+        act = active_entry_acquire(e->grant_table, ref);
 
         if ( e->grant_table->gt_version == 1 )
         {
-            grant_entry_v1_t *sha = &shared_entry_v1(e->grant_table, gop.ref);
+            grant_entry_v1_t *sha = &shared_entry_v1(e->grant_table, ref);
 
             guest_physmap_add_page(e, _gfn(sha->frame), mfn, 0);
             if ( !paging_mode_translate(e) )
@@ -2280,14 +2296,14 @@  gnttab_transfer(
         }
         else
         {
-            grant_entry_v2_t *sha = &shared_entry_v2(e->grant_table, gop.ref);
+            grant_entry_v2_t *sha = &shared_entry_v2(e->grant_table, ref);
 
             guest_physmap_add_page(e, _gfn(sha->full_page.frame), mfn, 0);
             if ( !paging_mode_translate(e) )
                 sha->full_page.frame = mfn_x(mfn);
         }
         smp_wmb();
-        shared_entry_header(e->grant_table, gop.ref)->flags |=
+        shared_entry_header(e->grant_table, ref)->flags |=
             GTF_transfer_completed;
 
         active_entry_release(act);
@@ -2426,8 +2442,10 @@  acquire_grant_for_copy(
         PIN_FAIL(gt_unlock_out, GNTST_bad_gntref,
                  "Bad grant reference %#x\n", gref);
 
-    act = active_entry_acquire(rgt, gref);
+    /* This call also ensures the above check cannot be passed speculatively */
     shah = shared_entry_header(rgt, gref);
+    act = active_entry_acquire(rgt, gref);
+
     if ( rgt->gt_version == 1 )
     {
         sha2 = NULL;
@@ -2843,6 +2861,9 @@  static int gnttab_copy_buf(const struct gnttab_copy *op,
                  op->dest.offset, dest->ptr.offset,
                  op->len, dest->len);
 
+    /* Make sure the above checks are not bypassed speculatively */
+    block_speculation();
+
     memcpy(dest->virt + op->dest.offset, src->virt + op->source.offset,
            op->len);
     gnttab_mark_dirty(dest->domain, dest->mfn);
@@ -2962,7 +2983,7 @@  gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
     struct grant_table *gt = currd->grant_table;
     grant_entry_v1_t reserved_entries[GNTTAB_NR_RESERVED_ENTRIES];
     int res;
-    unsigned int i;
+    unsigned int i, nr_ents;
 
     if ( copy_from_guest(&op, uop, 1) )
         return -EFAULT;
@@ -2986,7 +3007,8 @@  gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
      * are allowed to be in use (xenstore/xenconsole keeps them mapped).
      * (You need to change the version number for e.g. kexec.)
      */
-    for ( i = GNTTAB_NR_RESERVED_ENTRIES; i < nr_grant_entries(gt); i++ )
+    nr_ents = nr_grant_entries(gt);
+    for ( i = GNTTAB_NR_RESERVED_ENTRIES; i < nr_ents; i++ )
     {
         if ( read_atomic(&_active_entry(gt, i).pin) != 0 )
         {
@@ -3228,6 +3250,9 @@  swap_grant_ref(grant_ref_t ref_a, grant_ref_t ref_b)
     if ( unlikely(ref_b >= nr_grant_entries(d->grant_table)))
         PIN_FAIL(out, GNTST_bad_gntref, "Bad ref-b %#x\n", ref_b);
 
+    /* Make sure the above checks are not bypassed speculatively */
+    block_speculation();
+
     /* Swapping the same ref is a no-op. */
     if ( ref_a == ref_b )
         goto out;
@@ -3697,13 +3722,14 @@  void grant_table_warn_active_grants(struct domain *d)
     struct grant_table *gt = d->grant_table;
     struct active_grant_entry *act;
     grant_ref_t ref;
-    unsigned int nr_active = 0;
+    unsigned int nr_active = 0, nr_ents;
 
 #define WARN_GRANT_MAX 10
 
     grant_read_lock(gt);
 
-    for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
+    nr_ents = nr_grant_entries(gt);
+    for ( ref = 0; ref != nr_ents; ref++ )
     {
         act = active_entry_acquire(gt, ref);
         if ( !act->pin )
@@ -3853,7 +3879,8 @@  static int gnttab_get_status_frame_mfn(struct domain *d,
             return -EINVAL;
     }
 
-    *mfn = _mfn(virt_to_mfn(gt->status[idx]));
+    /* Make sure idx is bounded wrt nr_status_frames */
+    *mfn = _mfn(virt_to_mfn(gt->status[array_index_nospec(idx, nr_status_frames(gt))]));
     return 0;
 }
 
@@ -3882,7 +3909,8 @@  static int gnttab_get_shared_frame_mfn(struct domain *d,
             return -EINVAL;
     }
 
-    *mfn = _mfn(virt_to_mfn(gt->shared_raw[idx]));
+    /* Make sure idx is bounded wrt nr_grant_frames */
+    *mfn = _mfn(virt_to_mfn(gt->shared_raw[array_index_nospec(idx, nr_grant_frames(gt))]));
     return 0;
 }
 
@@ -3952,6 +3980,7 @@  static void gnttab_usage_print(struct domain *rd)
     int first = 1;
     grant_ref_t ref;
     struct grant_table *gt = rd->grant_table;
+    unsigned int nr_ents;
 
     printk("      -------- active --------       -------- shared --------\n");
     printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
@@ -3964,7 +3993,8 @@  static void gnttab_usage_print(struct domain *rd)
            nr_grant_frames(gt), gt->max_grant_frames,
            nr_maptrack_frames(gt), gt->max_maptrack_frames);
 
-    for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
+    nr_ents = nr_grant_entries(gt);
+    for ( ref = 0; ref != nr_ents; ref++ )
     {
         struct active_grant_entry *act;
         struct grant_entry_header *sha;