
[v5,01/10] mem_sharing/fork: do not attempt to populate vcpu_info page

Message ID 20231002151127.71020-2-roger.pau@citrix.com (mailing list archive)
State New, archived
Series runstate/time area registration by (guest) physical address

Commit Message

Roger Pau Monné Oct. 2, 2023, 3:11 p.m. UTC
Instead let map_vcpu_info() and its call to get_page_from_gfn()
populate the page in the child as needed.  Also remove the bogus
copy_domain_page(): it would have to be placed before the call to
map_vcpu_info(), as the latter can update the contents of the
vcpu_info page.

Note that this eliminates a bug in copy_vcpu_settings(): The function did
allocate a new page regardless of the GFN already having a mapping, thus in
particular breaking the case of two vCPU-s having their info areas on the same
page.

Fixes: 41548c5472a3 ('mem_sharing: VM forking')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Only build tested.
---
Changes since v4:
 - New in this version.
---
 xen/arch/x86/mm/mem_sharing.c | 36 ++++++-----------------------------
 1 file changed, 6 insertions(+), 30 deletions(-)

Comments

Roger Pau Monné Oct. 3, 2023, 7:15 a.m. UTC | #1
On Mon, Oct 02, 2023 at 01:05:24PM -0400, Tamas K Lengyel wrote:
> On Mon, Oct 2, 2023 at 11:12 AM Roger Pau Monne <roger.pau@citrix.com> wrote:
> >
> > Instead let map_vcpu_info() and its call to get_page_from_gfn()
> > populate the page in the child as needed.  Also remove the bogus
> > copy_domain_page(): it would have to be placed before the call to
> > map_vcpu_info(), as the latter can update the contents of the
> > vcpu_info page.
> >
> > Note that this eliminates a bug in copy_vcpu_settings(): The function did
> > allocate a new page regardless of the GFN already having a mapping, thus in
> > particular breaking the case of two vCPU-s having their info areas on the same
> > page.
> >
> > Fixes: 41548c5472a3 ('mem_sharing: VM forking')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Only build tested.
> 
> So this will allocate & map a page in, but its content won't be
> matching that of the parent. Is that not an issue? If there is no
> state on this page the guest is going to rely on, it looks fine to me.
> But if the guest may expect a certain state to be already setup on
> this page we could run into weird issues in the fork, no?

mem_sharing_fork_page() will do the copy from the parent, and this is
what gets called from get_page_from_gfn().

map_vcpu_info() might change some of the vcpu_info contents when
compared to the parent, but such changes could also happen without the
fork, and the guest should be able to handle them just like any update
to vcpu_info.  There will likely be a spurious event channel upcall
injected depending on the delivery method selected by the guest, but
again this could also happen without the fork taking place.

Thanks, Roger.
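
A rough sketch (not the literal Xen code) of the behaviour Roger describes
above: on a fork, a hole in the child's p2m is transparently backed by a
copy of the parent's page, which is what the get_page_from_gfn() call inside
map_vcpu_info() ends up relying on.  fork_get_page() is a hypothetical
wrapper used only to illustrate the flow; in the actual code the forking
happens inside the p2m lookup performed by get_page_from_gfn(), and the
header names are indicative only.

#include <xen/sched.h>
#include <asm/p2m.h>
#include <asm/mem_sharing.h>

/* Illustrative model of the fork-aware GFN lookup (hypothetical helper). */
static struct page_info *fork_get_page(struct domain *cd, gfn_t gfn)
{
    p2m_type_t t;
    /* Plain lookup first: no allocation, no unsharing. */
    struct page_info *pg = get_page_from_gfn(cd, gfn_x(gfn), &t, 0);

    if ( !pg && !mem_sharing_fork_page(cd, gfn, false /* unsharing */) )
        /* The GFN is now backed by a copy of the parent's page. */
        pg = get_page_from_gfn(cd, gfn_x(gfn), &t, 0);

    return pg;
}
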
Julien Grall Oct. 5, 2023, 12:10 p.m. UTC | #2
Hi,

While preparing to commit this series, I have noticed that there are no 
Acked-by/Reviewed-by from Tamas (or at least not present in my inbox).

@Tamas, can you provide one?

Cheers,

On 02/10/2023 16:11, Roger Pau Monne wrote:
> Instead let map_vcpu_info() and its call to get_page_from_gfn()
> populate the page in the child as needed.  Also remove the bogus
> copy_domain_page(): it would have to be placed before the call to
> map_vcpu_info(), as the latter can update the contents of the
> vcpu_info page.
> 
> Note that this eliminates a bug in copy_vcpu_settings(): The function did
> allocate a new page regardless of the GFN already having a mapping, thus in
> particular breaking the case of two vCPU-s having their info areas on the same
> page.
> 
> Fixes: 41548c5472a3 ('mem_sharing: VM forking')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Only build tested.
> ---
> Changes since v4:
>   - New in this version.
> ---
>   xen/arch/x86/mm/mem_sharing.c | 36 ++++++-----------------------------
>   1 file changed, 6 insertions(+), 30 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index ae5366d4476e..5f8f1fb4d871 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1689,48 +1689,24 @@ static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
>       unsigned int i;
>       struct p2m_domain *p2m = p2m_get_hostp2m(cd);
>       int ret = -EINVAL;
> +    mfn_t vcpu_info_mfn;
>   
>       for ( i = 0; i < cd->max_vcpus; i++ )
>       {
>           struct vcpu *d_vcpu = d->vcpu[i];
>           struct vcpu *cd_vcpu = cd->vcpu[i];
> -        mfn_t vcpu_info_mfn;
>   
>           if ( !d_vcpu || !cd_vcpu )
>               continue;
>   
> -        /* Copy & map in the vcpu_info page if the guest uses one */
> +        /* Map in the vcpu_info page if the guest uses one */
>           vcpu_info_mfn = d_vcpu->vcpu_info_mfn;
>           if ( !mfn_eq(vcpu_info_mfn, INVALID_MFN) )
>           {
> -            mfn_t new_vcpu_info_mfn = cd_vcpu->vcpu_info_mfn;
> -
> -            /* Allocate & map the page for it if it hasn't been already */
> -            if ( mfn_eq(new_vcpu_info_mfn, INVALID_MFN) )
> -            {
> -                gfn_t gfn = mfn_to_gfn(d, vcpu_info_mfn);
> -                unsigned long gfn_l = gfn_x(gfn);
> -                struct page_info *page;
> -
> -                if ( !(page = alloc_domheap_page(cd, 0)) )
> -                    return -ENOMEM;
> -
> -                new_vcpu_info_mfn = page_to_mfn(page);
> -                set_gpfn_from_mfn(mfn_x(new_vcpu_info_mfn), gfn_l);
> -
> -                ret = p2m->set_entry(p2m, gfn, new_vcpu_info_mfn,
> -                                     PAGE_ORDER_4K, p2m_ram_rw,
> -                                     p2m->default_access, -1);
> -                if ( ret )
> -                    return ret;
> -
> -                ret = map_vcpu_info(cd_vcpu, gfn_l,
> -                                    PAGE_OFFSET(d_vcpu->vcpu_info));
> -                if ( ret )
> -                    return ret;
> -            }
> -
> -            copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
> +            ret = map_vcpu_info(cd_vcpu, mfn_to_gfn(d, vcpu_info_mfn),
> +                                PAGE_OFFSET(d_vcpu->vcpu_info));
> +            if ( ret )
> +                return ret;
>           }
>   
>           ret = copy_vpmu(d_vcpu, cd_vcpu);
Roger Pau Monné Oct. 5, 2023, 12:42 p.m. UTC | #3
On Tue, Oct 03, 2023 at 09:46:13AM -0400, Tamas K Lengyel wrote:
> On Mon, Oct 2, 2023 at 11:12 AM Roger Pau Monne <roger.pau@citrix.com> wrote:
> >
> > Instead let map_vcpu_info() and its call to get_page_from_gfn()
> > populate the page in the child as needed.  Also remove the bogus
> > copy_domain_page(): it would have to be placed before the call to
> > map_vcpu_info(), as the latter can update the contents of the
> > vcpu_info page.
> >
> > Note that this eliminates a bug in copy_vcpu_settings(): The function did
> > allocate a new page regardless of the GFN already having a mapping, thus in
> > particular breaking the case of two vCPU-s having their info areas on the same
> > page.
> >
> > Fixes: 41548c5472a3 ('mem_sharing: VM forking')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Acked-by: Tamas K Lengyel <tamas@tklengyel.com>

Thanks, Roger.
George Dunlap Oct. 5, 2023, 12:42 p.m. UTC | #4
On Thu, Oct 5, 2023 at 1:11 PM Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> While preparing to commit this series, I have noticed that there are no
> Acked-by/Reviewed-by from Tamas (or at least not present in my inbox).
>
> @Tamas, can you provide one?

I see an "Acked-by" from Tamas two days ago.

 -George
Julien Grall Oct. 5, 2023, 12:47 p.m. UTC | #5
Hi George,

On 05/10/2023 13:42, George Dunlap wrote:
> On Thu, Oct 5, 2023 at 1:11 PM Julien Grall <julien@xen.org> wrote:
>>
>> Hi,
>>
>> While preparing to commit this series, I have noticed that there are no
>> Acked-by/Reviewed-by from Tamas (or at least not present in my inbox).
>>
>> @Tamas, can you provide one?
> 
> I see an "Acked-by" from Tamas two days ago.

Sadly, it is also not on lore.kernel.org or our xenproject mail archives.

In [1], Tamas pointed out he had some e-mail trouble. So anything sent 
by Tamas before last Tuesday is not present in my inbox or any mail 
archives.

Not clear why the Citrix folks received it.

Cheers,

[1] 
https://lore.kernel.org/all/CABfawhn0vH6rS8-SJQJVZtto2HA61By1bCG3w9bJMJR3x+rXsg@mail.gmail.com/
Tamas K Lengyel Oct. 5, 2023, 1:15 p.m. UTC | #6
On Mon, Oct 2, 2023 at 11:12 AM Roger Pau Monne <roger.pau@citrix.com> wrote:
>
> Instead let map_vcpu_info() and its call to get_page_from_gfn()
> populate the page in the child as needed.  Also remove the bogus
> copy_domain_page(): it would have to be placed before the call to
> map_vcpu_info(), as the latter can update the contents of the
> vcpu_info page.
>
> Note that this eliminates a bug in copy_vcpu_settings(): The function did
> allocate a new page regardless of the GFN already having a mapping, thus in
> particular breaking the case of two vCPU-s having their info areas on the same
> page.
>
> Fixes: 41548c5472a3 ('mem_sharing: VM forking')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Re-sending due to mailserver issues:

Acked-by: Tamas K Lengyel <tamas@tklengyel.com>

Patch

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index ae5366d4476e..5f8f1fb4d871 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1689,48 +1689,24 @@  static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
     unsigned int i;
     struct p2m_domain *p2m = p2m_get_hostp2m(cd);
     int ret = -EINVAL;
+    mfn_t vcpu_info_mfn;
 
     for ( i = 0; i < cd->max_vcpus; i++ )
     {
         struct vcpu *d_vcpu = d->vcpu[i];
         struct vcpu *cd_vcpu = cd->vcpu[i];
-        mfn_t vcpu_info_mfn;
 
         if ( !d_vcpu || !cd_vcpu )
             continue;
 
-        /* Copy & map in the vcpu_info page if the guest uses one */
+        /* Map in the vcpu_info page if the guest uses one */
         vcpu_info_mfn = d_vcpu->vcpu_info_mfn;
         if ( !mfn_eq(vcpu_info_mfn, INVALID_MFN) )
         {
-            mfn_t new_vcpu_info_mfn = cd_vcpu->vcpu_info_mfn;
-
-            /* Allocate & map the page for it if it hasn't been already */
-            if ( mfn_eq(new_vcpu_info_mfn, INVALID_MFN) )
-            {
-                gfn_t gfn = mfn_to_gfn(d, vcpu_info_mfn);
-                unsigned long gfn_l = gfn_x(gfn);
-                struct page_info *page;
-
-                if ( !(page = alloc_domheap_page(cd, 0)) )
-                    return -ENOMEM;
-
-                new_vcpu_info_mfn = page_to_mfn(page);
-                set_gpfn_from_mfn(mfn_x(new_vcpu_info_mfn), gfn_l);
-
-                ret = p2m->set_entry(p2m, gfn, new_vcpu_info_mfn,
-                                     PAGE_ORDER_4K, p2m_ram_rw,
-                                     p2m->default_access, -1);
-                if ( ret )
-                    return ret;
-
-                ret = map_vcpu_info(cd_vcpu, gfn_l,
-                                    PAGE_OFFSET(d_vcpu->vcpu_info));
-                if ( ret )
-                    return ret;
-            }
-
-            copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
+            ret = map_vcpu_info(cd_vcpu, mfn_to_gfn(d, vcpu_info_mfn),
+                                PAGE_OFFSET(d_vcpu->vcpu_info));
+            if ( ret )
+                return ret;
         }
 
         ret = copy_vpmu(d_vcpu, cd_vcpu);
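
A rough sketch (the GFN and offsets are made up) of the case the commit
message calls out: two vCPUs whose vcpu_info areas live on the same guest
page simply issue two map_vcpu_info() calls against the same GFN with
different offsets.  The first call populates the GFN in the child from the
parent; the second finds the page already present.  The removed code would
instead have allocated and inserted a fresh page per vCPU at that same GFN,
clobbering the first vCPU's mapping.

/* Illustration only: two vcpu_info areas sharing parent GFN 0x102. */
static int example_two_vcpus_one_page(struct domain *cd)
{
    int rc = map_vcpu_info(cd->vcpu[0], 0x102, 0x000);

    if ( rc )
        return rc;

    /*
     * get_page_from_gfn() inside this second call finds GFN 0x102 already
     * populated (forked by the first call) and merely maps the second
     * vCPU's area at offset 0x40 within the same page.
     */
    return map_vcpu_info(cd->vcpu[1], 0x102, 0x040);
}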