
migration: don't require length alignment when choosing the fast dirty sync path

Message ID 20200306034109.19992-1-zhukeqian1@huawei.com (mailing list archive)
State New, archived

Commit Message

zhukeqian March 6, 2020, 3:41 a.m. UTC
In commit aa777e297c840, the ramblock length was required to be aligned to
word pages (BITS_PER_LONG pages) when choosing the fast dirty sync path.
The reason given was: "If the Ramblock is less than 64 pages in length
that long can contain bits representing two different RAMBlocks, but the
code will update the bmap belonging to the 1st RAMBlock only while having
updated the total dirty page count for both."

This was true before commit 801110ab22be1ef2, which aligns ram_addr_t
allocation on long boundaries. With that in place, a bitmap long can no
longer span two RAMBlocks, so we won't "update the total dirty page count
for both".

Remove the alignment constraint on length so that the fast dirty sync
path can always be used when the start address is word-aligned.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: qemu-devel@nongnu.org
---
 include/exec/ram_addr.h | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

Patch

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 5e59a3d8d7..40fd89e1cd 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -445,15 +445,13 @@  uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                                                ram_addr_t length,
                                                uint64_t *real_dirty_pages)
 {
-    ram_addr_t addr;
-    unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
+    ram_addr_t start_global = start + rb->offset;
+    unsigned long word = BIT_WORD(start_global >> TARGET_PAGE_BITS);
     uint64_t num_dirty = 0;
     unsigned long *dest = rb->bmap;
 
-    /* start address and length is aligned at the start of a word? */
-    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) ==
-         (start + rb->offset) &&
-        !(length & ((BITS_PER_LONG << TARGET_PAGE_BITS) - 1))) {
+    /* start address is aligned at the start of a word? */
+    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) == start_global) {
         int k;
         int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
         unsigned long * const *src;
@@ -495,11 +493,10 @@  uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
             memory_region_clear_dirty_bitmap(rb->mr, start, length);
         }
     } else {
-        ram_addr_t offset = rb->offset;
-
+        ram_addr_t addr;
         for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
             if (cpu_physical_memory_test_and_clear_dirty(
-                        start + addr + offset,
+                        start_global + addr,
                         TARGET_PAGE_SIZE,
                         DIRTY_MEMORY_MIGRATION)) {
                 *real_dirty_pages += 1;