diff mbox series

[for-next,RFC,4/8] x86: factor out xen variants for hypervisor setup code

Message ID 20190923100931.29670-5-liuwe@microsoft.com (mailing list archive)
State Superseded
Headers show
Series Port Xen to Hyper-V | expand

Commit Message

Wei Liu Sept. 23, 2019, 10:09 a.m. UTC
We will add Hyper-V specific implementations in the future.

No functional change.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
---
 xen/arch/x86/guest/xen/xen.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

Comments

Roger Pau Monné Sept. 25, 2019, 10:23 a.m. UTC | #1
On Mon, Sep 23, 2019 at 11:09:27AM +0100, Wei Liu wrote:
> We will add Hyper-V specific implementations in the future.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <liuwe@microsoft.com>
> ---
>  xen/arch/x86/guest/xen/xen.c | 32 ++++++++++++++++++++++++++------
>  1 file changed, 26 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
> index 78fc603996..f93c8fbd1c 100644
> --- a/xen/arch/x86/guest/xen/xen.c
> +++ b/xen/arch/x86/guest/xen/xen.c
> @@ -67,7 +67,7 @@ static void __init find_xen_leaves(void)
>      }
>  }
>  
> -void __init probe_hypervisor(void)
> +static void __init probe_xen(void)

While here I would rename to xen_probe, to match the other
functions (e.g. xen_setup below).

>  {
>      if ( xen_guest )
>          return;
> @@ -87,6 +87,11 @@ void __init probe_hypervisor(void)
>      xen_guest = true;
>  }
>  
> +void __init probe_hypervisor(void)

Shouldn't this live in a separate file, like guest/guest.c or some
such?

Also it might be nice to introduce something like:

enum guest_type {
    XEN_GUEST,
} guest_type;

So that you can add a switch here, even if the only case is Xen ATM. I
guess this will be done in a later patch, or introduce a
hypervisor_ops struct that contains pointers to functions to allow for
different implementations.

> +{
> +    probe_xen();
> +}
> +
>  static void map_shared_info(void)
>  {
>      mfn_t mfn;
> @@ -249,10 +254,8 @@ static void init_evtchn(void)
>      }
>  }
>  
> -void __init hypervisor_setup(void)
> +static void __init xen_setup(void)
>  {
> -    init_memmap();
> -
>      map_shared_info();
>  
>      set_vcpu_id();
> @@ -277,13 +280,25 @@ void __init hypervisor_setup(void)
>      init_evtchn();
>  }
>  
> -void hypervisor_ap_setup(void)
> +void __init hypervisor_setup(void)
> +{
> +    init_memmap();

I wonder, do you also need to map hypervisor data into the guest
physmap when running on HyperV?

Is there a way when running on HyperV to request unused physical
address space ranges? What Xen currently does in init_memmap is quite
crappy.

Thanks, Roger.
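As a purely illustrative sketch (not the patch's code; all names here are invented stand-ins), the two dispatch styles suggested above could look like this:

```c
/*
 * Hypothetical sketch contrasting the two dispatch styles discussed:
 * an enum + switch in each wrapper, versus a hypervisor_ops structure
 * selected once at probe time.  Not Xen's actual code.
 */

enum guest_type {
    XEN_GUEST,
    /* HYPERV_GUEST would be added by a later patch. */
};

static enum guest_type guest_type = XEN_GUEST;

static int setup_calls;  /* counts invocations, for illustration only */

/* Stand-in for the real xen_setup(). */
static void xen_setup(void)
{
    setup_calls++;
}

/* Style 1: every hypervisor_* entry point carries its own switch. */
void hypervisor_setup_switch(void)
{
    switch ( guest_type )
    {
    case XEN_GUEST:
        xen_setup();
        break;
    }
}

/* Style 2: one ops pointer set during probe; entry points stay flat. */
struct hypervisor_ops {
    void (*setup)(void);
};

static const struct hypervisor_ops xen_ops = {
    .setup = xen_setup,
};

static const struct hypervisor_ops *ops = &xen_ops;

void hypervisor_setup_ops(void)
{
    ops->setup();
}
```

With a single guest type the two styles are equivalent; the ops variant avoids repeating the switch across hypervisor_ap_setup(), hypervisor_resume() and the other entry points once a second guest type exists.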
Wei Liu Sept. 27, 2019, 11:30 a.m. UTC | #2
On Wed, Sep 25, 2019 at 12:23:54PM +0200, Roger Pau Monné wrote:
> > -void __init probe_hypervisor(void)
> > +static void __init probe_xen(void)
> 
> While here I would rename to xen_probe, to match the other
> functions (e.g. xen_setup below).

Sure. I can do that. I always thought that strange too.

> 
> >  {
> >      if ( xen_guest )
> >          return;
> > @@ -87,6 +87,11 @@ void __init probe_hypervisor(void)
> >      xen_guest = true;
> >  }
> >  
> > +void __init probe_hypervisor(void)
> 
> Shouldn't this live in a separate file, like guest/guest.c or some
> such?
> 

It's done in a later patch. I believe you've already seen it.

> Also it might be nice to introduce something like:
> 
> enum guest_type {
>     XEN_GUEST,
> } guest_type;
> 

This works for me.

> So that you can add a switch here, even if the only case is Xen ATM. I
> guess this will be done in a later patch, or introduce a
> hypervisor_ops struct that contains pointers to functions to allow for
> different implementations.
> 

I debated this. These changes require more code churn with no apparent
benefit, but if people agree this is better I can make those changes.

> > +{
> > +    probe_xen();
> > +}
> > +
> >  static void map_shared_info(void)
> >  {
> >      mfn_t mfn;
> > @@ -249,10 +254,8 @@ static void init_evtchn(void)
> >      }
> >  }
> >  
> > -void __init hypervisor_setup(void)
> > +static void __init xen_setup(void)
> >  {
> > -    init_memmap();
> > -
> >      map_shared_info();
> >  
> >      set_vcpu_id();
> > @@ -277,13 +280,25 @@ void __init hypervisor_setup(void)
> >      init_evtchn();
> >  }
> >  
> > -void hypervisor_ap_setup(void)
> > +void __init hypervisor_setup(void)
> > +{
> > +    init_memmap();
> 
> I wonder, do you also need to map hypervisor data into the guest
> physmap when running on HyperV?
> 

Yes. There are a lot of comparable concepts in Hyper-V. For example,
there is a page called VP assist page which is more or less the same as
Xen's vcpuinfo. Its format, content and interfaces may be different, but
conceptually it is the same as vcpuinfo.

> Is there a way when running on HyperV to request unused physical
> address space ranges? What Xen currently does in init_memmap is quite
> crappy.
> 

Xen itself still needs to manage the address space, no?

I agree the rangeset code in xen.c isn't pretty, but it does the job and
is not too intrusive.

Wei.

> Thanks, Roger.
Jan Beulich Sept. 27, 2019, 11:39 a.m. UTC | #3
On 27.09.2019 13:30, Wei Liu wrote:
> On Wed, Sep 25, 2019 at 12:23:54PM +0200, Roger Pau Monné wrote:
>> Also it might be nice to introduce something like:
>>
>> enum guest_type {
>>     XEN_GUEST,
>> } guest_type;
>>
> 
> This works for me.
> 
>> So that you can add a switch here, even if the only case is Xen ATM. I
>> guess this will be done in a later patch, or introduce a
>> hypervisor_ops struct that contains pointers to functions to allow for
>> different implementations.
>>
> 
> I debated this. These changes require more code churn with no apparent
> benefit, but if people agree this is better I can make those changes.

Well, if the expectation is for the enum to grow (even just by one
further entry), then yes, I'd agree that a switch() would be nice.

Jan
Roger Pau Monné Sept. 27, 2019, 11:41 a.m. UTC | #4
On Fri, Sep 27, 2019 at 12:30:58PM +0100, Wei Liu wrote:
> On Wed, Sep 25, 2019 at 12:23:54PM +0200, Roger Pau Monné wrote:
> > > -void __init probe_hypervisor(void)
> > > +static void __init probe_xen(void)
> > 
> > While here I would rename to xen_probe, to match the other
> > functions (e.g. xen_setup below).
> 
> Sure. I can do that. I always thought that strange too.
> 
> > 
> > >  {
> > >      if ( xen_guest )
> > >          return;
> > > @@ -87,6 +87,11 @@ void __init probe_hypervisor(void)
> > >      xen_guest = true;
> > >  }
> > >  
> > > +void __init probe_hypervisor(void)
> > 
> > Shouldn't this live in a separate file, like guest/guest.c or some
> > such?
> > 
> 
> It's done in a later patch. I believe you've already seen it.
> 
> > Also it might be nice to introduce something like:
> > 
> > enum guest_type {
> >     XEN_GUEST,
> > } guest_type;
> > 
> 
> This works for me.
> 
> > So that you can add a switch here, even if the only case is Xen ATM. I
> > guess this will be done in a later patch, or introduce a
> > hypervisor_ops struct that contains pointers to functions to allow for
> > different implementations.
> > 
> 
> I debated this. These changes require more code churn with no apparent
> benefit, but if people agree this is better I can make those changes.
> 
> > > +{
> > > +    probe_xen();
> > > +}
> > > +
> > >  static void map_shared_info(void)
> > >  {
> > >      mfn_t mfn;
> > > @@ -249,10 +254,8 @@ static void init_evtchn(void)
> > >      }
> > >  }
> > >  
> > > -void __init hypervisor_setup(void)
> > > +static void __init xen_setup(void)
> > >  {
> > > -    init_memmap();
> > > -
> > >      map_shared_info();
> > >  
> > >      set_vcpu_id();
> > > @@ -277,13 +280,25 @@ void __init hypervisor_setup(void)
> > >      init_evtchn();
> > >  }
> > >  
> > > -void hypervisor_ap_setup(void)
> > > +void __init hypervisor_setup(void)
> > > +{
> > > +    init_memmap();
> > 
> > I wonder, do you also need to map hypervisor data into the guest
> > physmap when running on HyperV?
> > 
> 
> Yes. There are a lot of comparable concepts in Hyper-V. For example,
> there is a page called VP assist page which is more or less the same as
> Xen's vcpuinfo. Its format, content and interfaces may be different, but
> conceptually it is the same as vcpuinfo.
> 
> > Is there a way when running on HyperV to request unused physical
> > address space ranges? What Xen currently does in init_memmap is quite
> > crappy.
> > 
> 
> Xen itself still needs to manage the address space, no?
>
> I agree the rangeset code in xen.c isn't pretty, but it does the job and
> is not too intrusive.

The problem with the current approach is that the nested Xen has no
way of knowing whether those holes are actually unused, or are MMIO
regions used by devices for example.

Hence I wondered if HyperV had a way to signal to guests that a
physical address range is safe to be used as scratch mapping space. We
had spoken about introducing something in Xen to be able to report to
guests safe ranges in the physmap to map stuff.

Thanks, Roger.
Wei Liu Sept. 27, 2019, 12:46 p.m. UTC | #5
On Fri, Sep 27, 2019 at 01:41:59PM +0200, Roger Pau Monné wrote:
> > > 
> > > I wonder, do you also need to map hypervisor data into the guest
> > > physmap when running on HyperV?
> > > 
> > 
> > Yes. There are a lot of comparable concepts in Hyper-V. For example,
> > there is a page called VP assist page which is more or less the same as
> > Xen's vcpuinfo. Its format, content and interfaces may be different, but
> > conceptually it is the same as vcpuinfo.
> > 
> > > Is there a way when running on HyperV to request unused physical
> > > address space ranges? What Xen currently does in init_memmap is quite
> > > crappy.
> > > 
> > 
> > Xen itself still needs to manage the address space, no?
> >
> > I agree the rangeset code in xen.c isn't pretty, but it does the job and
> > is not too intrusive.
> 
> The problem with the current approach is that the nested Xen has no
> way of knowing whether those holes are actually unused, or are MMIO
> regions used by devices for example.
> 
> Hence I wondered if HyperV had a way to signal to guests that a
> physical address range is safe to be used as scratch mapping space. We
> had spoken about introducing something in Xen to be able to report to
> guests safe ranges in the physmap to map stuff.

AFAICT the Hyper-V TLFS doesn't describe such functionality.

On the other hand, Hyper-V may not need this infrastructure at all
because it doesn't have a grant table frame or a shared info page. The
most likely outcome is that in the next version the memmap stuff will
be left Xen-only until I find a use case for it.

Wei.

> 
> Thanks, Roger.
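For context, the init_memmap machinery under discussion boils down to recording unpopulated regions of the guest physmap and handing frames out of them for mapping hypervisor data. A much-simplified, hypothetical sketch (the real code builds a rangeset from the e820 map; this tracks a single hole, and the names are invented):

```c
/*
 * Simplified illustration of the idea behind Xen's init_memmap() /
 * hypervisor_alloc_unused_page(): record a hole in the guest physmap,
 * then claim frames from it on demand.  Not Xen's actual code.
 */

#include <stdbool.h>

static unsigned long hole_start, hole_end;  /* free pfns: [start, end) */

/* Record one unpopulated region (what init_memmap derives from e820). */
void init_memmap_stub(unsigned long s, unsigned long e)
{
    hole_start = s;
    hole_end = e;
}

/* Claim a frame from the hole, or fail once it is exhausted. */
bool alloc_unused_page_stub(unsigned long *pfn)
{
    if ( hole_start >= hole_end )
        return false;
    *pfn = --hole_end;
    return true;
}
```

Roger's objection above is that nothing guarantees such a hole is truly unused rather than, say, an unreported MMIO region, which is why a hypervisor-advertised safe range would be preferable.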
Wei Liu Sept. 27, 2019, 12:47 p.m. UTC | #6
On Fri, Sep 27, 2019 at 01:39:14PM +0200, Jan Beulich wrote:
> On 27.09.2019 13:30, Wei Liu wrote:
> > On Wed, Sep 25, 2019 at 12:23:54PM +0200, Roger Pau Monné wrote:
> >> Also it might be nice to introduce something like:
> >>
> >> enum guest_type {
> >>     XEN_GUEST,
> >> } guest_type;
> >>
> > 
> > This works for me.
> > 
> >> So that you can add a switch here, even if the only case is Xen ATM. I
>> guess this will be done in a later patch, or introduce a
>> hypervisor_ops struct that contains pointers to functions to allow for
> >> different implementations.
> >>
> > 
> > I debated this. These changes require more code churn with no apparent
> > benefit, but if people agree this is better I can make those changes.
> 
> Well, if the expectation is for the enum to grow (even just by one
> further entry), then yes, I'd agree that a switch() would be nice.

Not sure if you noticed my comments in a later email.

Do you prefer the enum+switch solution or the hypervisor_ops struct?

Wei.

> 
> Jan
Jan Beulich Sept. 27, 2019, 12:56 p.m. UTC | #7
On 27.09.2019 14:47, Wei Liu wrote:
> On Fri, Sep 27, 2019 at 01:39:14PM +0200, Jan Beulich wrote:
>> On 27.09.2019 13:30, Wei Liu wrote:
>>> On Wed, Sep 25, 2019 at 12:23:54PM +0200, Roger Pau Monné wrote:
>>>> Also it might be nice to introduce something like:
>>>>
>>>> enum guest_type {
>>>>     XEN_GUEST,
>>>> } guest_type;
>>>>
>>>
>>> This works for me.
>>>
>>>> So that you can add a switch here, even if the only case is Xen ATM. I
>>>> guess this will be done in a later patch, or introduce a
>>>> hypervisor_ops struct that contains pointers to functions to allow for
>>>> different implementations.
>>>>
>>>
>>> I debated this. These changes require more code churn with no apparent
>>> benefit, but if people agree this is better I can make those changes.
>>
>> Well, if the expectation is for the enum to grow (even just by one
>> further entry), then yes, I'd agree that a switch() would be nice.
> 
> Not sure if you noticed my comments in a later email.
> 
> Do you prefer the enum+switch solution or the hypervisor_ops struct?

Hard to tell without knowing how many switch()es there would
end up being. The more of them, the more I'd prefer the ops
structure variant.

Jan

Patch

diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index 78fc603996..f93c8fbd1c 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -67,7 +67,7 @@  static void __init find_xen_leaves(void)
     }
 }
 
-void __init probe_hypervisor(void)
+static void __init probe_xen(void)
 {
     if ( xen_guest )
         return;
@@ -87,6 +87,11 @@  void __init probe_hypervisor(void)
     xen_guest = true;
 }
 
+void __init probe_hypervisor(void)
+{
+    probe_xen();
+}
+
 static void map_shared_info(void)
 {
     mfn_t mfn;
@@ -249,10 +254,8 @@  static void init_evtchn(void)
     }
 }
 
-void __init hypervisor_setup(void)
+static void __init xen_setup(void)
 {
-    init_memmap();
-
     map_shared_info();
 
     set_vcpu_id();
@@ -277,13 +280,25 @@  void __init hypervisor_setup(void)
     init_evtchn();
 }
 
-void hypervisor_ap_setup(void)
+void __init hypervisor_setup(void)
+{
+    init_memmap();
+
+    xen_setup();
+}
+
+static void xen_ap_setup(void)
 {
     set_vcpu_id();
     map_vcpuinfo();
     init_evtchn();
 }
 
+void hypervisor_ap_setup(void)
+{
+    xen_ap_setup();
+}
+
 int hypervisor_alloc_unused_page(mfn_t *mfn)
 {
     unsigned long m;
@@ -307,7 +322,7 @@  static void ap_resume(void *unused)
     init_evtchn();
 }
 
-void hypervisor_resume(void)
+static void xen_resume(void)
 {
     /* Reset shared info page. */
     map_shared_info();
@@ -330,6 +345,11 @@  void hypervisor_resume(void)
         pv_console_init();
 }
 
+void hypervisor_resume(void)
+{
+    xen_resume();
+}
+
 /*
  * Local variables:
  * mode: C