
[v2,08/12] viridian: add ExProcessorMasks variants of the flush hypercalls

Message ID 20201120094900.1489-9-paul@xen.org
State Superseded
Series viridian: add support for ExProcessorMasks

Commit Message

Paul Durrant Nov. 20, 2020, 9:48 a.m. UTC
From: Paul Durrant <pdurrant@amazon.com>

The Microsoft Hypervisor TLFS specifies variants of the already implemented
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
Processor Set' as an argument rather than a simple 64-bit mask.

This patch adds a new hvcall_flush_ex() function to implement these
(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
determine the size of the Virtual Processor Set (so it can be copied from
guest memory) and parse it into hypercall_vpmask (respectively).

NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
      support needs to be advertised via CPUID. This will be done in a
      subsequent patch.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Add helper macros to define mask and struct sizes
 - Use a union to determine the size of 'hypercall_vpset'
 - Use hweight64() in hv_vpset_nr_banks()
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 142 +++++++++++++++++++++++++++
 1 file changed, 142 insertions(+)
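
As a rough, stand-alone illustration of how the sparse 'Virtual Processor
Set' is sized and decoded (a sketch only, mirroring the patch below; the
struct layout is a simplified stand-in for the TLFS definition, and
__builtin_popcountll() stands in for Xen's hweight64()):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the TLFS 'Virtual Processor Set' header. */
struct hv_vpset {
    uint64_t format;
    uint64_t valid_bank_mask;
    uint64_t bank_contents[];
};

#define HV_VPSET_BANK_SIZE   sizeof(((struct hv_vpset *)NULL)->bank_contents[0])
#define HV_VPSET_SIZE(banks) \
    (sizeof(struct hv_vpset) + ((banks) * HV_VPSET_BANK_SIZE))
#define NR_VPS_PER_BANK      (HV_VPSET_BANK_SIZE * 8)

int main(void)
{
    /* Banks 0 and 2 are present, describing VPs 0-63 and 128-191. */
    uint64_t valid_bank_mask = 0x5;
    unsigned int banks = __builtin_popcountll(valid_bank_mask);

    /*
     * Only the banks named in valid_bank_mask follow the header in guest
     * memory, so this is how much needs copying in from the guest.
     */
    printf("banks=%u, bytes=%zu, VPs per bank=%zu\n",
           banks, HV_VPSET_SIZE(banks), (size_t)NR_VPS_PER_BANK);

    return 0;
}

For valid_bank_mask = 0x5 this prints banks=2, bytes=32: only the two
64-bit bank entries actually present are copied, even though the highest
numbered VP described is 191.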

Comments

Jan Beulich Nov. 24, 2020, 4:55 p.m. UTC | #1
On 20.11.2020 10:48, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> The Microsoft Hypervisor TLFS specifies variants of the already implemented
> HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
> Processor Set' as an argument rather than a simple 64-bit mask.
> 
> This patch adds a new hvcall_flush_ex() function to implement these
> (HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
> new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
> determine the size of the Virtual Processor Set (so it can be copied from
> guest memory) and parse it into hypercall_vpmask (respectively).
> 
> NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
>       support needs to be advertised via CPUID. This will be done in a
>       subsequent patch.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Just a couple of cosmetic remarks, apart from them this looks
good to me, albeit some of the size calculations are quite,
well, involved. I guess like with most parts of this series,
in the end Wei will need to give his ack.

> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -576,6 +576,70 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
>      return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
>  }
>  
> +#define HV_VPSET_BANK_SIZE \
> +    sizeof_field(struct hv_vpset, bank_contents[0])
> +
> +#define HV_VPSET_SIZE(banks)   \
> +    (sizeof(struct hv_vpset) + (banks * HV_VPSET_BANK_SIZE))

Personally I think this would be better done using
offsetof(struct hv_vpset, bank_contents), but you're the maintainer.
However, "banks" wants parenthesizing, just in case.

> +#define HV_VPSET_MAX_BANKS \
> +    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
> +
> +struct hypercall_vpset {
> +    union {
> +        struct hv_vpset set;
> +        uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
> +    };
> +};

A struct with just a union as member could be expressed as a simple
union - any reason you prefer the slightly more involved variant?
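
For reference, the simpler form would be along these lines (sketch only;
the per-CPU declaration would change from struct to union to match):

union hypercall_vpset {
    struct hv_vpset set;
    uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
};

static DEFINE_PER_CPU(union hypercall_vpset, hypercall_vpset);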

> +static DEFINE_PER_CPU(struct hypercall_vpset, hypercall_vpset);
> +
> +static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
> +{
> +    return hweight64(vpset->valid_bank_mask);
> +}
> +
> +static uint16_t hv_vpset_to_vpmask(struct hv_vpset *set,

const?
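
That is, the prototype would presumably become (sketch):

static uint16_t hv_vpset_to_vpmask(const struct hv_vpset *set,
                                   struct hypercall_vpmask *vpmask)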

> @@ -656,6 +720,78 @@ static int hvcall_flush(union hypercall_input *input,
>      return 0;
>  }
>  
> +static int hvcall_flush_ex(union hypercall_input *input,

const again?

> +                           union hypercall_output *output,
> +                           unsigned long input_params_gpa,
> +                           unsigned long output_params_gpa)

Mainly for cosmetic reasons and to be in sync with
hvm_copy_from_guest_phys()'s respective parameter, perhaps both
would better be paddr_t?
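
Taking the const and paddr_t suggestions together, the prototype might
then read (sketch, not the posted code):

static int hvcall_flush_ex(const union hypercall_input *input,
                           union hypercall_output *output,
                           paddr_t input_params_gpa,
                           paddr_t output_params_gpa)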

Jan
Paul Durrant Nov. 24, 2020, 6:40 p.m. UTC | #2
> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 24 November 2020 16:56
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v2 08/12] viridian: add ExProcessorMasks variants of the flush hypercalls
> 
> On 20.11.2020 10:48, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > The Microsoft Hypervisor TLFS specifies variants of the already implemented
> > HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
> > Processor Set' as an argument rather than a simple 64-bit mask.
> >
> > This patch adds a new hvcall_flush_ex() function to implement these
> > (HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
> > new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
> > determine the size of the Virtual Processor Set (so it can be copied from
> > guest memory) and parse it into hypercall_vpmask (respectively).
> >
> > NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
> >       support needs to be advertised via CPUID. This will be done in a
> >       subsequent patch.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> 
> Just a couple of cosmetic remarks, apart from them this looks
> good to me, albeit some of the size calculations are quite,
> well, involved. I guess like with most parts of this series,
> in the end Wei will need to give his ack.
> 
> > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > @@ -576,6 +576,70 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
> >      return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
> >  }
> >
> > +#define HV_VPSET_BANK_SIZE \
> > +    sizeof_field(struct hv_vpset, bank_contents[0])
> > +
> > +#define HV_VPSET_SIZE(banks)   \
> > +    (sizeof(struct hv_vpset) + (banks * HV_VPSET_BANK_SIZE))
> 
> Personally I think this would be better done using
> offsetof(struct hv_vpset, bank_contents), but you're the maintainer.
> However, "banks" wants parenthesizing, just in case.
> 

No, I actually prefer using offsetof() and yes I do indeed need to parenthesize 'banks'.

> > +#define HV_VPSET_MAX_BANKS \
> > +    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
> > +
> > +struct hypercall_vpset {
> > +    union {
> > +        struct hv_vpset set;
> > +        uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
> > +    };
> > +};
> 
> A struct with just a union as member could be expressed as a simple
> union - any reason you prefer the slightly more involved variant?
> 

Not really... it's only that it was a struct in the original patch. I'll change to using a union.

> > +static DEFINE_PER_CPU(struct hypercall_vpset, hypercall_vpset);
> > +
> > +static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
> > +{
> > +    return hweight64(vpset->valid_bank_mask);
> > +}
> > +
> > +static uint16_t hv_vpset_to_vpmask(struct hv_vpset *set,
> 
> const?
> 

Ok.

> > @@ -656,6 +720,78 @@ static int hvcall_flush(union hypercall_input *input,
> >      return 0;
> >  }
> >
> > +static int hvcall_flush_ex(union hypercall_input *input,
> 
> const again?
> 

True, but I'll need to go back and do that for the others too.

> > +                           union hypercall_output *output,
> > +                           unsigned long input_params_gpa,
> > +                           unsigned long output_params_gpa)
> 
> Mainly for cosmetic reasons and to be in sync with
> hvm_copy_from_guest_phys()'s respective parameter, perhaps both
> would better be paddr_t?
> 

Ok. Again I'll fix the prior patches to match.

  Paul

> Jan

Patch

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index d6f47b28c1e6..e736c0739da0 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -576,6 +576,70 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
     return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
 }
 
+#define HV_VPSET_BANK_SIZE \
+    sizeof_field(struct hv_vpset, bank_contents[0])
+
+#define HV_VPSET_SIZE(banks)   \
+    (sizeof(struct hv_vpset) + (banks * HV_VPSET_BANK_SIZE))
+
+#define HV_VPSET_MAX_BANKS \
+    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
+
+struct hypercall_vpset {
+    union {
+        struct hv_vpset set;
+        uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
+    };
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpset, hypercall_vpset);
+
+static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
+{
+    return hweight64(vpset->valid_bank_mask);
+}
+
+static uint16_t hv_vpset_to_vpmask(struct hv_vpset *set,
+                                   struct hypercall_vpmask *vpmask)
+{
+#define NR_VPS_PER_BANK (HV_VPSET_BANK_SIZE * 8)
+
+    switch ( set->format )
+    {
+    case HV_GENERIC_SET_ALL:
+        vpmask_fill(vpmask);
+        return 0;
+
+    case HV_GENERIC_SET_SPARSE_4K:
+    {
+        uint64_t bank_mask;
+        unsigned int vp, bank = 0;
+
+        vpmask_empty(vpmask);
+        for ( vp = 0, bank_mask = set->valid_bank_mask;
+              bank_mask;
+              vp += NR_VPS_PER_BANK, bank_mask >>= 1 )
+        {
+            if ( bank_mask & 1 )
+            {
+                uint64_t mask = set->bank_contents[bank];
+
+                vpmask_set(vpmask, vp, mask);
+                bank++;
+            }
+        }
+        return 0;
+    }
+
+    default:
+        break;
+    }
+
+    return -EINVAL;
+
+#undef NR_VPS_PER_BANK
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -656,6 +720,78 @@ static int hvcall_flush(union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_flush_ex(union hypercall_input *input,
+                           union hypercall_output *output,
+                           unsigned long input_params_gpa,
+                           unsigned long output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        struct hv_vpset set;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        vpmask_fill(vpmask);
+    else
+    {
+        struct hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+        struct hv_vpset *set = &vpset->set;
+        size_t size;
+        int rc;
+
+        *set = input_params.set;
+        if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+        {
+            unsigned long offset = offsetof(typeof(input_params),
+                                            set.bank_contents);
+
+            size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+            if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+                 sizeof(*vpset) )
+            {
+                ASSERT_UNREACHABLE();
+                return -EINVAL;
+            }
+
+            if ( hvm_copy_from_guest_phys(&set->bank_contents[0],
+                                          input_params_gpa + offset,
+                                          size) != HVMTRANS_okay)
+                return -EINVAL;
+
+            size += sizeof(*set);
+        }
+        else
+            size = sizeof(*set);
+
+        rc = hv_vpset_to_vpmask(set, vpmask);
+        if ( rc )
+            return rc;
+    }
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, vpmask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
@@ -769,6 +905,12 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                           output_params_gpa);
         break;
 
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        rc = hvcall_flush_ex(&input, &output, input_params_gpa,
+                             output_params_gpa);
+        break;
+
     case HVCALL_SEND_IPI:
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);