
[XEN] Check zone before merging adjacent blocks in heap

Message ID 20200204143010.5117-1-stewart.hildebrand@dornerworks.com (mailing list archive)
State Superseded
Series: [XEN] Check zone before merging adjacent blocks in heap

Commit Message

Stewart Hildebrand Feb. 4, 2020, 2:30 p.m. UTC
From: Jeff Kubascik <jeff.kubascik@dornerworks.com>

The Xen heap is divided into nodes and zones. Each node + zone
combination is managed as a separate pool of memory.

When returning pages to the heap, free_heap_pages will check adjacent
blocks to see if they can be combined into a larger block. However, the
zone of the adjacent block is not checked. This can result in a merged
block migrating from one zone to another.

When a block migrates to the adjacent zone, the avail counters for the
old and new node + zone are not updated accordingly. The avail counter
is used when allocating pages to determine whether to skip over a zone.
With this behavior, it is possible for free pages to collect in a zone
whose avail counter is smaller than the actual free page count, leaving
free pages that are not allocable.

This commit adds a check to compare the adjacent block's zone with the
current zone before merging them.

Signed-off-by: Jeff Kubascik <jeff.kubascik@dornerworks.com>
---

Since this topic came up again, I figure it makes sense to resend it as
a real patch using git send-email rather than in reply to an existing
email.

---
 xen/common/page_alloc.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Stewart Hildebrand Feb. 4, 2020, 2:35 p.m. UTC | #1
On Tuesday, February 4, 2020 9:30 AM, Stewart Hildebrand wrote:
>diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>index 97902d42c1..7d39dd5be0 100644
>--- a/xen/common/page_alloc.c
>+++ b/xen/common/page_alloc.c
>@@ -1462,6 +1462,7 @@ static void free_heap_pages(
>             if ( !mfn_valid(page_to_mfn(predecessor)) ||
>                  !page_state_is(predecessor, free) ||
>                  (PFN_ORDER(predecessor) != order) ||
>+                 (page_to_zone(pg-mask) != zone) ||

It seems it would be more consistent with the surrounding code if we did s/pg-mask/predecessor/.

>                  (phys_to_nid(page_to_maddr(predecessor)) != node) )
>                 break;
>
>@@ -1485,6 +1486,7 @@ static void free_heap_pages(
>             if ( !mfn_valid(page_to_mfn(successor)) ||
>                  !page_state_is(successor, free) ||
>                  (PFN_ORDER(successor) != order) ||
>+                 (page_to_zone(pg+mask) != zone) ||

Similarly, s/pg+mask/successor/.

>                  (phys_to_nid(page_to_maddr(successor)) != node) )
>                 break;
>
>--
>2.25.0

Patch

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 97902d42c1..7d39dd5be0 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1462,6 +1462,7 @@ static void free_heap_pages(
             if ( !mfn_valid(page_to_mfn(predecessor)) ||
                  !page_state_is(predecessor, free) ||
                  (PFN_ORDER(predecessor) != order) ||
+                 (page_to_zone(pg-mask) != zone) ||
                  (phys_to_nid(page_to_maddr(predecessor)) != node) )
                 break;
 
@@ -1485,6 +1486,7 @@ static void free_heap_pages(
             if ( !mfn_valid(page_to_mfn(successor)) ||
                  !page_state_is(successor, free) ||
                  (PFN_ORDER(successor) != order) ||
+                 (page_to_zone(pg+mask) != zone) ||
                  (phys_to_nid(page_to_maddr(successor)) != node) )
                 break;