
[v2,06/30] x86/paging: introduce paging_set_allocation

Message ID 1474991845-27962-7-git-send-email-roger.pau@citrix.com (mailing list archive)
State New, archived

Commit Message

Roger Pau Monné Sept. 27, 2016, 3:57 p.m. UTC
... and remove hap_set_alloc_for_pvh_dom0.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>
---
Changes since RFC:
 - Make paging_set_allocation preemptible.
 - Move comments.
---
 xen/arch/x86/domain_build.c     | 17 +++++++++++++----
 xen/arch/x86/mm/hap/hap.c       | 14 +-------------
 xen/arch/x86/mm/paging.c        | 16 ++++++++++++++++
 xen/arch/x86/mm/shadow/common.c |  7 +------
 xen/include/asm-x86/hap.h       |  2 +-
 xen/include/asm-x86/paging.h    |  7 +++++++
 xen/include/asm-x86/shadow.h    |  8 ++++++++
 7 files changed, 47 insertions(+), 24 deletions(-)

Comments

Jan Beulich Sept. 29, 2016, 10:51 a.m. UTC | #1
>>> On 27.09.16 at 17:57, <roger.pau@citrix.com> wrote:
> @@ -1383,15 +1382,25 @@ int __init construct_dom0(
>                           nr_pages);
>      }
>  
> -    if ( is_pvh_domain(d) )
> -        hap_set_alloc_for_pvh_dom0(d, dom0_paging_pages(d, nr_pages));
> -
>      /*
>       * We enable paging mode again so guest_physmap_add_page will do the
>       * right thing for us.
>       */

I'm afraid you render this comment stale - please adjust it accordingly.

> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -954,6 +954,22 @@ void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
>          safe_write_pte(p, new);
>  }
>  
> +int paging_set_allocation(struct domain *d, unsigned long pages, int *preempted)

Since you need to touch the other two functions anyway, please
make the last parameter bool * here and there.
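
I.e. the three prototypes would then read along these lines (just a
sketch of the signatures, with the callers adjusted to match):

  int hap_set_allocation(struct domain *d, unsigned long pages,
                         bool *preempted);
  int sh_set_allocation(struct domain *d, unsigned long pages,
                        bool *preempted);
  int paging_set_allocation(struct domain *d, unsigned long pages,
                            bool *preempted);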

Jan
Roger Pau Monné Sept. 29, 2016, 2:51 p.m. UTC | #2
On Thu, Sep 29, 2016 at 04:51:42AM -0600, Jan Beulich wrote:
> >>> On 27.09.16 at 17:57, <roger.pau@citrix.com> wrote:
> > @@ -1383,15 +1382,25 @@ int __init construct_dom0(
> >                           nr_pages);
> >      }
> >  
> > -    if ( is_pvh_domain(d) )
> > -        hap_set_alloc_for_pvh_dom0(d, dom0_paging_pages(d, nr_pages));
> > -
> >      /*
> >       * We enable paging mode again so guest_physmap_add_page will do the
> >       * right thing for us.
> >       */
> 
> I'm afraid you render this comment stale - please adjust it accordingly.

Not AFAICT, this comment is referring to the next line, which is:

d->arch.paging.mode = save_pvh_pg_mode;

The classic PVH domain builder contains quite a lot of craziness and 
disables paging modes at certain points by playing with d->arch.paging.mode.
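
Roughly what the builder does around this spot (a sketch from memory,
not verbatim code):

  save_pvh_pg_mode = d->arch.paging.mode;
  d->arch.paging.mode = 0;                 /* "disable" paging */

  /* ... build page tables / p2m with the paging hooks out of the way ... */

  d->arch.paging.mode = save_pvh_pg_mode;  /* re-enable paging */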

> > --- a/xen/arch/x86/mm/paging.c
> > +++ b/xen/arch/x86/mm/paging.c
> > @@ -954,6 +954,22 @@ void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
> >          safe_write_pte(p, new);
> >  }
> >  
> > +int paging_set_allocation(struct domain *d, unsigned long pages, int *preempted)
> 
> Since you need to touch the other two functions anyway, please
> make the last parameter bool * here and there.

OK, that's fine.

Thanks, Roger.
Jan Beulich Sept. 29, 2016, 4:12 p.m. UTC | #3
>>> On 29.09.16 at 16:51, <roger.pau@citrix.com> wrote:
> On Thu, Sep 29, 2016 at 04:51:42AM -0600, Jan Beulich wrote:
>> >>> On 27.09.16 at 17:57, <roger.pau@citrix.com> wrote:
>> > @@ -1383,15 +1382,25 @@ int __init construct_dom0(
>> >                           nr_pages);
>> >      }
>> >  
>> > -    if ( is_pvh_domain(d) )
>> > -        hap_set_alloc_for_pvh_dom0(d, dom0_paging_pages(d, nr_pages));
>> > -
>> >      /*
>> >       * We enable paging mode again so guest_physmap_add_page will do the
>> >       * right thing for us.
>> >       */
>> 
>> I'm afraid you render this comment stale - please adjust it accordingly.
> 
> Not AFAICT, this comment is referring to the next line, which is:
> 
> d->arch.paging.mode = save_pvh_pg_mode;
> 
> The classic PVH domain builder contains quite a lot of craziness and 
> disables paging modes at certain points by playing with d->arch.paging.mode.

Right, but your addition below that line now also depends on that
restore, afaict.

Jan
Roger Pau Monné Sept. 29, 2016, 4:57 p.m. UTC | #4
On Thu, Sep 29, 2016 at 10:12:12AM -0600, Jan Beulich wrote:
> >>> On 29.09.16 at 16:51, <roger.pau@citrix.com> wrote:
> > On Thu, Sep 29, 2016 at 04:51:42AM -0600, Jan Beulich wrote:
> >> >>> On 27.09.16 at 17:57, <roger.pau@citrix.com> wrote:
> >> > @@ -1383,15 +1382,25 @@ int __init construct_dom0(
> >> >                           nr_pages);
> >> >      }
> >> >  
> >> > -    if ( is_pvh_domain(d) )
> >> > -        hap_set_alloc_for_pvh_dom0(d, dom0_paging_pages(d, nr_pages));
> >> > -
> >> >      /*
> >> >       * We enable paging mode again so guest_physmap_add_page will do the
> >> >       * right thing for us.
> >> >       */
> >> 
> >> I'm afraid you render this comment stale - please adjust it accordingly.
> > 
> > Not AFAICT, this comment is referring to the next line, which is:
> > 
> > d->arch.paging.mode = save_pvh_pg_mode;
> > 
> > The classic PVH domain builder contains quite a lot of craziness and 
> > disables paging modes at certain points by playing with d->arch.paging.mode.
> 
> Right, but your addition below that line now also depends on that
> restore, afaict.

Yes, that's completely right, sorry for overlooking it. I've expanded the 
comment to also reference paging_set_allocation (or else we would hit an 
assert).
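
Something along these lines (a sketch, the final wording may differ):

  /*
   * Re-enable paging mode so that both guest_physmap_add_page and
   * paging_set_allocation do the right thing for us (the latter
   * asserts paging_mode_enabled(d)).
   */
  d->arch.paging.mode = save_pvh_pg_mode;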

Roger.
George Dunlap Sept. 30, 2016, 5 p.m. UTC | #5
On 27/09/16 16:57, Roger Pau Monne wrote:
> ... and remove hap_set_alloc_for_pvh_dom0.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Acked-by: Tim Deegan <tim@xen.org>

Acked-by: George Dunlap <george.dunlap@citrix.com>


Patch

diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 0a02d65..04d6cb0 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -35,7 +35,6 @@ 
 #include <asm/setup.h>
 #include <asm/bzimage.h> /* for bzimage_parse */
 #include <asm/io_apic.h>
-#include <asm/hap.h>
 #include <asm/hpet.h>
 
 #include <public/version.h>
@@ -1383,15 +1382,25 @@  int __init construct_dom0(
                          nr_pages);
     }
 
-    if ( is_pvh_domain(d) )
-        hap_set_alloc_for_pvh_dom0(d, dom0_paging_pages(d, nr_pages));
-
     /*
      * We enable paging mode again so guest_physmap_add_page will do the
      * right thing for us.
      */
     d->arch.paging.mode = save_pvh_pg_mode;
 
+    if ( is_pvh_domain(d) )
+    {
+        int preempted;
+
+        do {
+            preempted = 0;
+            paging_set_allocation(d, dom0_paging_pages(d, nr_pages),
+                                  &preempted);
+            process_pending_softirqs();
+        } while ( preempted );
+    }
+
+
     /* Write the phys->machine and machine->phys table entries. */
     for ( pfn = 0; pfn < count; pfn++ )
     {
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 2dc82f5..4420e4e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -334,7 +334,7 @@  hap_get_allocation(struct domain *d)
 
 /* Set the pool of pages to the required number of pages.
  * Returns 0 for success, non-zero for failure. */
-static int
+int
 hap_set_allocation(struct domain *d, unsigned long pages, int *preempted)
 {
     struct page_info *pg;
@@ -640,18 +640,6 @@  int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
     }
 }
 
-void __init hap_set_alloc_for_pvh_dom0(struct domain *d,
-                                       unsigned long hap_pages)
-{
-    int rc;
-
-    paging_lock(d);
-    rc = hap_set_allocation(d, hap_pages, NULL);
-    paging_unlock(d);
-
-    BUG_ON(rc);
-}
-
 static const struct paging_mode hap_paging_real_mode;
 static const struct paging_mode hap_paging_protected_mode;
 static const struct paging_mode hap_paging_pae_mode;
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index cc44682..2717bd3 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -954,6 +954,22 @@  void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
         safe_write_pte(p, new);
 }
 
+int paging_set_allocation(struct domain *d, unsigned long pages, int *preempted)
+{
+    int rc;
+
+    ASSERT(paging_mode_enabled(d));
+
+    paging_lock(d);
+    if ( hap_enabled(d) )
+        rc = hap_set_allocation(d, pages, preempted);
+    else
+        rc = sh_set_allocation(d, pages, preempted);
+    paging_unlock(d);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index d3cc2cc..53ffe1a 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1609,12 +1609,7 @@  shadow_free_p2m_page(struct domain *d, struct page_info *pg)
     paging_unlock(d);
 }
 
-/* Set the pool of shadow pages to the required number of pages.
- * Input will be rounded up to at least shadow_min_acceptable_pages(),
- * plus space for the p2m table.
- * Returns 0 for success, non-zero for failure. */
-static int sh_set_allocation(struct domain *d, unsigned long pages,
-                             int *preempted)
+int sh_set_allocation(struct domain *d, unsigned long pages, int *preempted)
 {
     struct page_info *sp;
     unsigned int lower_bound;
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index c613836..9d59430 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -46,7 +46,7 @@  int   hap_track_dirty_vram(struct domain *d,
                            XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
 
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
-void hap_set_alloc_for_pvh_dom0(struct domain *d, unsigned long num_pages);
+int hap_set_allocation(struct domain *d, unsigned long pages, int *preempted);
 
 #endif /* XEN_HAP_H */
 
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index 56eef6b..c2d60d3 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -347,6 +347,13 @@  void pagetable_dying(struct domain *d, paddr_t gpa);
 void paging_dump_domain_info(struct domain *d);
 void paging_dump_vcpu_info(struct vcpu *v);
 
+/* Set the pool of shadow pages to the required number of pages.
+ * Input might be rounded up to a minimum number of pages, plus
+ * space for the p2m table.
+ * Returns 0 for success, non-zero for failure. */
+int paging_set_allocation(struct domain *d, unsigned long pages,
+                          int *preempted);
+
 #endif /* XEN_PAGING_H */
 
 /*
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 6d0aefb..f0e2227 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -83,6 +83,12 @@  void sh_remove_shadows(struct domain *d, mfn_t gmfn, int fast, int all);
 /* Discard _all_ mappings from the domain's shadows. */
 void shadow_blow_tables_per_domain(struct domain *d);
 
+/* Set the pool of shadow pages to the required number of pages.
+ * Input will be rounded up to at least shadow_min_acceptable_pages(),
+ * plus space for the p2m table.
+ * Returns 0 for success, non-zero for failure. */
+int sh_set_allocation(struct domain *d, unsigned long pages, int *preempted);
+
 #else /* !CONFIG_SHADOW_PAGING */
 
 #define shadow_teardown(d, p) ASSERT(is_pv_domain(d))
@@ -91,6 +97,8 @@  void shadow_blow_tables_per_domain(struct domain *d);
     ({ ASSERT(is_pv_domain(d)); -EOPNOTSUPP; })
 #define shadow_track_dirty_vram(d, begin_pfn, nr, bitmap) \
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
+#define sh_set_allocation(d, pages, preempted) \
+    ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 
 static inline void sh_remove_shadows(struct domain *d, mfn_t gmfn,
                                      bool_t fast, bool_t all) {}