[2/3] xmalloc: don't evaluate ADD_REGION without holding the pool lock

Message ID: 20190702163840.2107-3-paul.durrant@citrix.com (mailing list archive)
State: Superseded
Series: xmalloc patches

Commit Message

Paul Durrant July 2, 2019, 4:38 p.m. UTC
It's not safe to add a region without holding the lock, but this is
exactly what may happen if two threads race entering xmem_pool_alloc()
before init_region is set.

This patch instead checks init_region under the lock, drops the lock if
it needs to allocate a page, re-acquires the lock, adds the region, and
only then points init_region at the newly added region if it is still
unset. Thus a race may cause an extra region to be added, but there
will be no pool metadata corruption.
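
For reference, a minimal standalone sketch of the same
check/drop/re-check pattern, using pthreads rather than Xen's
spinlocks; the pool structure and all names below are invented for
illustration and are not the Xen code.

#include <pthread.h>
#include <stdlib.h>

struct pool {
    pthread_mutex_t lock;
    void *init_region;       /* first region published, or NULL */
    size_t init_size;
    unsigned int nr_regions; /* stands in for the pool's region list */
};

static void *pool_get_region(struct pool *p)
{
    pthread_mutex_lock(&p->lock);
    if ( p->init_region == NULL )
    {
        void *region;

        /* Don't call the allocator while holding the lock. */
        pthread_mutex_unlock(&p->lock);
        if ( (region = malloc(p->init_size)) == NULL )
            return NULL;
        pthread_mutex_lock(&p->lock);

        p->nr_regions++; /* stands in for ADD_REGION() */

        /* Re-check: another thread may have won the race. */
        if ( p->init_region == NULL )
            p->init_region = region;
        /*
         * Otherwise another thread won the race and this region is the
         * "extra" one mentioned above; the real ADD_REGION() keeps it
         * usable in the pool (this toy counter does not).
         */
    }
    pthread_mutex_unlock(&p->lock);

    /* init_region never changes once set, so this read is safe. */
    return p->init_region;
}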

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wl@xen.org>
---
 xen/common/xmalloc_tlsf.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

Comments

Jan Beulich July 3, 2019, 9:56 a.m. UTC | #1
On 02.07.2019 18:38, Paul Durrant wrote:
> --- a/xen/common/xmalloc_tlsf.c
> +++ b/xen/common/xmalloc_tlsf.c
> @@ -380,18 +380,22 @@ void *xmem_pool_alloc(unsigned long size, struct xmem_pool *pool)
>      int fl, sl;
>      unsigned long tmp_size;
>   
> +    spin_lock(&pool->lock);
>      if ( pool->init_region == NULL )
>      {
> +       spin_unlock(&pool->lock);
>         if ( (region = pool->get_mem(pool->init_size)) == NULL )
>             goto out;
> +       spin_lock(&pool->lock);
>         ADD_REGION(region, pool->init_size, pool);
> -       pool->init_region = region;
> +       /* Re-check since the lock was dropped */
> +       if ( pool->init_region == NULL )
> +           pool->init_region = region;
>      }

Instead of this, how about deleting the init_region field?
It's not really used anywhere. I can't rule out that
functions like FIND_SUITABLE_BLOCK() expect _some_ region to
be present in the pool, but even that wouldn't require
tracking which one was the first to get allocated. A check
in xmem_pool_destroy() that at least one region is present
would then suffice.

Jan
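
A rough sketch of what this suggestion could look like, reusing the
hypothetical struct pool from the sketch above (nr_regions again
stands in for the pool's real region tracking; none of this is taken
from an actual follow-up patch):

#include <assert.h>

static void *pool_get_region_no_init(struct pool *p)
{
    void *region = NULL;

    pthread_mutex_lock(&p->lock);
    if ( p->nr_regions == 0 )
    {
        pthread_mutex_unlock(&p->lock);
        if ( (region = malloc(p->init_size)) == NULL )
            return NULL;
        pthread_mutex_lock(&p->lock);
        p->nr_regions++; /* stands in for ADD_REGION() */
        /*
         * No init_region to publish: a racing thread adding a second
         * region is harmless, and nothing needs re-checking.
         */
    }
    pthread_mutex_unlock(&p->lock);

    /* The real function would fall through to its normal search here. */
    return region;
}

static void pool_destroy(struct pool *p)
{
    /* The suggested sanity check moves here. */
    assert(p->nr_regions >= 1);
}
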
Paul Durrant July 3, 2019, 10:13 a.m. UTC | #2
> From: Jan Beulich <JBeulich@suse.com>
> Sent: 03 July 2019 10:57
> 
> On 02.07.2019 18:38, Paul Durrant wrote:
> > --- a/xen/common/xmalloc_tlsf.c
> > +++ b/xen/common/xmalloc_tlsf.c
> > @@ -380,18 +380,22 @@ void *xmem_pool_alloc(unsigned long size, struct xmem_pool *pool)
> >      int fl, sl;
> >      unsigned long tmp_size;
> >
> > +    spin_lock(&pool->lock);
> >      if ( pool->init_region == NULL )
> >      {
> > +       spin_unlock(&pool->lock);
> >         if ( (region = pool->get_mem(pool->init_size)) == NULL )
> >             goto out;
> > +       spin_lock(&pool->lock);
> >         ADD_REGION(region, pool->init_size, pool);
> > -       pool->init_region = region;
> > +       /* Re-check since the lock was dropped */
> > +       if ( pool->init_region == NULL )
> > +           pool->init_region = region;
> >      }
> 
> Instead of this, how about deleting the init_region field?
> It's not really used anywhere. I can't rule out that
> functions like FIND_SUITABLE_BLOCK() expect _some_ region to
> be present in the pool, but even that wouldn't require
> tracking which one was the first to get allocated. A check
> in xmem_pool_destroy() that at least one region is present
> would then suffice.
> 

Ok, I can do it that way instead... not that anything calls xmem_pool_destroy() at the moment anyway.

  Paul

> Jan

Patch

diff --git a/xen/common/xmalloc_tlsf.c b/xen/common/xmalloc_tlsf.c
index 6d889b7bdc..71597c3590 100644
--- a/xen/common/xmalloc_tlsf.c
+++ b/xen/common/xmalloc_tlsf.c
@@ -380,18 +380,22 @@ void *xmem_pool_alloc(unsigned long size, struct xmem_pool *pool)
     int fl, sl;
     unsigned long tmp_size;
 
+    spin_lock(&pool->lock);
     if ( pool->init_region == NULL )
     {
+        spin_unlock(&pool->lock);
         if ( (region = pool->get_mem(pool->init_size)) == NULL )
             goto out;
+        spin_lock(&pool->lock);
         ADD_REGION(region, pool->init_size, pool);
-        pool->init_region = region;
+        /* Re-check since the lock was dropped */
+        if ( pool->init_region == NULL )
+            pool->init_region = region;
     }
 
     size = (size < MIN_BLOCK_SIZE) ? MIN_BLOCK_SIZE : ROUNDUP_SIZE(size);
     /* Rounding up the requested size and calculating fl and sl */
 
-    spin_lock(&pool->lock);
  retry_find:
     MAPPING_SEARCH(&size, &fl, &sl);