
xen/page_alloc: Don't hold the heap_lock when clearing PGC_need_scrub

Message ID 20210406193032.16976-1-julien@xen.org (mailing list archive)
State New, archived
Series xen/page_alloc: Don't hold the heap_lock when clearing PGC_need_scrub

Commit Message

Julien Grall April 6, 2021, 7:30 p.m. UTC
From: Julien Grall <jgrall@amazon.com>

Currently, the heap_lock is held when clearing PGC_need_scrub in
alloc_heap_pages(). However, this is unnecessary because the only caller
(mark_page_offline()) that can concurrently modify the count_info is
using cmpxchg() in a loop.

Therefore, rework the code to avoid holding the heap_lock and use
test_and_clear_bit() instead.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/page_alloc.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)
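
To illustrate the reasoning in the commit message: the concurrent updater modifies count_info with a cmpxchg() retry loop, so an atomic read-modify-write on the same word (such as test_and_clear_bit()) either is observed before the cmpxchg() or causes it to fail and retry; no external lock is needed to keep the two updates from losing each other. The sketch below is not the Xen code: it uses C11 atomics in place of Xen's cmpxchg() and test_and_clear_bit() helpers, and the bit names are made up for illustration.

/* Standalone sketch only -- C11 atomics stand in for Xen's helpers,
 * and the bit values are hypothetical. */
#include <stdatomic.h>
#include <stdio.h>

#define NEED_SCRUB_BIT  (1UL << 0)   /* stand-in for PGC_need_scrub      */
#define BROKEN_BIT      (1UL << 1)   /* stand-in for a bit the concurrent
                                        offline path might set           */

/* Concurrent updater: retries until the word did not change underneath
 * it, so it can never lose an update made by another atomic RMW. */
static void offline_path(_Atomic unsigned long *count_info)
{
    unsigned long old = atomic_load(count_info);
    unsigned long new;

    do {
        new = old | BROKEN_BIT;
    } while ( !atomic_compare_exchange_weak(count_info, &old, new) );
}

/* Allocator path: one atomic read-modify-write replaces the original
 * "test_bit(); lock; clear bit; unlock" sequence. */
static int alloc_path(_Atomic unsigned long *count_info)
{
    unsigned long prev = atomic_fetch_and(count_info, ~NEED_SCRUB_BIT);

    return !!(prev & NEED_SCRUB_BIT);  /* was the bit set before clearing? */
}

int main(void)
{
    _Atomic unsigned long count_info = NEED_SCRUB_BIT;

    if ( alloc_path(&count_info) )
        printf("page needed scrubbing\n");
    offline_path(&count_info);
    printf("final count_info: %#lx\n", (unsigned long)atomic_load(&count_info));
    return 0;
}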

Comments

Jan Beulich April 7, 2021, 9:31 a.m. UTC | #1
On 06.04.2021 21:30, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, the heap_lock is held when clearing PGC_need_scrub in
> alloc_heap_pages(). However, this is unnecessary because the only caller
> (mark_page_offline()) that can concurrently modify the count_info is
> using cmpxchg() in a loop.
> 
> Therefore, rework the code to avoid holding the heap_lock and use
> test_and_clear_bit() instead.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Patch

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 68e47d963842..70146a00ec8b 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1038,16 +1038,12 @@  static struct page_info *alloc_heap_pages(
     {
         for ( i = 0; i < (1U << order); i++ )
         {
-            if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
+            if ( test_and_clear_bit(_PGC_need_scrub, &pg[i].count_info) )
             {
                 if ( !(memflags & MEMF_no_scrub) )
                     scrub_one_page(&pg[i]);
 
                 dirty_cnt++;
-
-                spin_lock(&heap_lock);
-                pg[i].count_info &= ~PGC_need_scrub;
-                spin_unlock(&heap_lock);
             }
             else if ( !(memflags & MEMF_no_scrub) )
                 check_one_page(&pg[i]);