Message ID: 20231013150839.867164-3-william.roche@oracle.com (mailing list archive)
State: New, archived
Series: Qemu crashes on VM migration after a handled memory error
On Fri, Oct 13, 2023 at 03:08:39PM +0000, William Roche wrote:
> diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
> index 5e95c496bb..e8db6380c1 100644
> --- a/target/arm/kvm64.c
> +++ b/target/arm/kvm64.c
> @@ -1158,7 +1158,6 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
>          ram_addr = qemu_ram_addr_from_host(addr);
>          if (ram_addr != RAM_ADDR_INVALID &&
>              kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
> -            kvm_hwpoison_page_add(ram_addr);
>              /*
>               * If this is a BUS_MCEERR_AR, we know we have been called
>               * synchronously from the vCPU thread, so we can easily
> @@ -1169,7 +1168,12 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
>               * called synchronously from the vCPU thread, or a bit
>               * later from the main thread, so doing the injection of
>               * the error would be more complicated.
> +             * In this case, BUS_MCEERR_AO errors are unknown from the
> +             * guest, and we will prevent migration as long as this
> +             * poisoned page hasn't generated a BUS_MCEERR_AR error
> +             * that the guest takes into account.
>               */
> +            kvm_hwpoison_page_add(ram_addr, (code == BUS_MCEERR_AR));

I'm curious why ARM doesn't forward this event to guest even if it's AO.
X86 does it, and makes more sense to me.  Not familiar with arm, do you
know the reason?

I think this patch needs review from ARM and/or KVM side.  Do you want to
have the 1st patch merged, or rather wait for the whole set?

Another thing to mention: feel free to look at a recent addition of ioctl
from userfault, where it can inject poisoned ptes:

https://lore.kernel.org/r/20230707215540.2324998-1-axelrasmussen@google.com

I'm wondering if that'll be helpful to qemu too, where we can migrate
hwpoison_page_list and enforce the poisoning on dest.  Then even for AO
when accessed by guest it'll generate another MCE on dest.

>              if (code == BUS_MCEERR_AR) {
>                  kvm_cpu_synchronize_state(c);
>                  if (!acpi_ghes_record_errors(ACPI_HEST_SRC_ID_SEA, paddr)) {
On 10/16/23 18:48, Peter Xu wrote:
> On Fri, Oct 13, 2023 at 03:08:39PM +0000, William Roche wrote:
>> diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
>> index 5e95c496bb..e8db6380c1 100644
>> --- a/target/arm/kvm64.c
>> +++ b/target/arm/kvm64.c
>> @@ -1158,7 +1158,6 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
>>          ram_addr = qemu_ram_addr_from_host(addr);
>>          if (ram_addr != RAM_ADDR_INVALID &&
>>              kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
>> -            kvm_hwpoison_page_add(ram_addr);
>>              /*
>>               * If this is a BUS_MCEERR_AR, we know we have been called
>>               * synchronously from the vCPU thread, so we can easily
>> @@ -1169,7 +1168,12 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
>>               * called synchronously from the vCPU thread, or a bit
>>               * later from the main thread, so doing the injection of
>>               * the error would be more complicated.
>> +             * In this case, BUS_MCEERR_AO errors are unknown from the
>> +             * guest, and we will prevent migration as long as this
>> +             * poisoned page hasn't generated a BUS_MCEERR_AR error
>> +             * that the guest takes into account.
>>               */
>> +            kvm_hwpoison_page_add(ram_addr, (code == BUS_MCEERR_AR));
>
> I'm curious why ARM doesn't forward this event to guest even if it's AO.
> X86 does it, and makes more sense to me.

I agree that forwarding this error is the best option to implement.
But an important note about this aspect is that only the Intel
architecture handles the AO error forwarding correctly; currently an AMD
VM crashes when an AO error relay is attempted.

That's why we've submitted the following kvm patch, among other AMD
enhancements to better deal with MCE relay:
https://lore.kernel.org/all/20230912211824.90952-3-john.allen@amd.com/

> Not familiar with arm, do you know the reason?

I can't answer this question as I don't know anything about the specific
'complications' mentioned in the comment above. Maybe something around
the injection through ACPI GHES and its interrupt mechanism?

But note also that ignoring AO errors is just a matter of relying on the
hypervisor kernel to generate an AR error when the asynchronously
poisoned page is touched later, which can be acceptable -- when the
system guarantees the AR fault on the page.

> I think this patch needs review from ARM and/or KVM side.  Do you want to
> have the 1st patch merged, or rather wait for the whole set?

I think that integrating the first patch alone is not an option, as we
would introduce the silent data corruption possibility I described.
It would be better to integrate the two of them as a whole set. But the
use of the kernel feature you pointed me to could change all of that!

> Another thing to mention: feel free to look at a recent addition of ioctl
> from userfault, where it can inject poisoned ptes:
>
> https://lore.kernel.org/r/20230707215540.2324998-1-axelrasmussen@google.com
>
> I'm wondering if that'll be helpful to qemu too, where we can migrate
> hwpoison_page_list and enforce the poisoning on dest.  Then even for AO
> when accessed by guest it'll generate another MCE on dest.

I could be missing something, but yes, this is exactly how I understand
this kernel feature's use case, from its description in:
https://lore.kernel.org/all/20230707215540.2324998-5-axelrasmussen@google.com/

vvvvvv
So the basic way to use this new feature is:

- On the new host, the guest's memory is registered with userfaultfd, in
  either MISSING or MINOR mode (doesn't really matter for this purpose).
- On any first access, we get a userfaultfd event. At this point we can
  communicate with the old host to find out if the page was poisoned.
- If so, we can respond with a UFFDIO_POISON - this places a swap marker
  so any future accesses will SIGBUS. Because the pte is now "present",
  future accesses won't generate more userfaultfd events, they'll just
  SIGBUS directly.
^^^^^^

Thank you for letting me know about this kernel functionality.

I need to take some time to investigate it, to see how I could use it.
The solution I'm suggesting here doesn't cover as many cases as the use
of UFFDIO_POISON could help to implement, but it gives us a possibility
to live migrate VMs that have already experienced memory errors,
trusting the VM kernel to correctly deal with these past errors.

AFAIK, currently, a standard qemu VM that has experienced a memory error
can't be live migrated at all. Please correct me if I'm wrong.

Thanks again.
On Tue, Oct 17, 2023 at 02:38:48AM +0200, William Roche wrote: > On 10/16/23 18:48, Peter Xu wrote: > > On Fri, Oct 13, 2023 at 03:08:39PM +0000, “William Roche wrote: > > > diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c > > > index 5e95c496bb..e8db6380c1 100644 > > > --- a/target/arm/kvm64.c > > > +++ b/target/arm/kvm64.c > > > @@ -1158,7 +1158,6 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr) > > > ram_addr = qemu_ram_addr_from_host(addr); > > > if (ram_addr != RAM_ADDR_INVALID && > > > kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) { > > > - kvm_hwpoison_page_add(ram_addr); > > > /* > > > * If this is a BUS_MCEERR_AR, we know we have been called > > > * synchronously from the vCPU thread, so we can easily > > > @@ -1169,7 +1168,12 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr) > > > * called synchronously from the vCPU thread, or a bit > > > * later from the main thread, so doing the injection of > > > * the error would be more complicated. > > > + * In this case, BUS_MCEERR_AO errors are unknown from the > > > + * guest, and we will prevent migration as long as this > > > + * poisoned page hasn't generated a BUS_MCEERR_AR error > > > + * that the guest takes into account. > > > */ > > > + kvm_hwpoison_page_add(ram_addr, (code == BUS_MCEERR_AR)); > > > > I'm curious why ARM doesn't forward this event to guest even if it's AO. > > X86 does it, and makes more sense to me. > > I agree that forwarding this error is the best option to implement. > But an important note about this aspect is that only Intel architecture > handles the AO error forwarding correctly; currently an AMD VM crashes > when an AO error relay is attempted. > > That's why we've submitted the following kvm patch: > https://lore.kernel.org/all/20230912211824.90952-3-john.allen@amd.com/ > Among other AMD enhancements to better deal with MCE relay. I see. > > > > Not familiar with arm, do you > > know the reason? 
> > I can't answer this question as I don't know anything about the specific > 'complications' mentioned in the comment above. Maybe something around > the injection through ACPI GHES and its interrupt mechanism ?? > But note also that ignoring AO errors is just a question of relying on > the Hypervisor kernel to generate an AR error when the asynchronously > poisoned page is touched later. Which can be acceptable -- when the > system guaranties the AR fault on the page. > > > > > I think this patch needs review from ARM and/or KVM side. Do you want to > > have the 1st patch merged, or rather wait for the whole set? > > I think that integrating the first patch alone is not an option > as we would introduce the silent data corruption possibility I > described. I asked because I think patch 1 itself is still an improvement, which avoids src VM from crashing when hitting poisoned pages. Especially IIUC on some arch (Intel?) it's a complete fix. But for sure we can keep them as a whole series if you want, but then it'll be good you add some more reviewers; at least some ARM/AMD developers, perhaps. > It would be better to integrate the two of them as a whole > set. But the use of the kernel feature you indicated me can change all > of that ! > > > > > Another thing to mention: feel free to look at a recent addition of ioctl > > from userfault, where it can inject poisoned ptes: > > > > https://lore.kernel.org/r/20230707215540.2324998-1-axelrasmussen@google.com > > > > I'm wondering if that'll be helpful to qemu too, where we can migrate > > hwpoison_page_list and enforce the poisoning on dest. Then even for AO > > when accessed by guest it'll generated another MCE on dest. 
> > I could be missing something, but Yes, this is exactly how I understand > this kernel feature use case with its description in: > https://lore.kernel.org/all/20230707215540.2324998-5-axelrasmussen@google.com/ > > vvvvvv > So the basic way to use this new feature is: > > - On the new host, the guest's memory is registered with userfaultfd, in > either MISSING or MINOR mode (doesn't really matter for this purpose). > - On any first access, we get a userfaultfd event. At this point we can > communicate with the old host to find out if the page was poisoned. > - If so, we can respond with a UFFDIO_POISON - this places a swap marker > so any future accesses will SIGBUS. Because the pte is now "present", > future accesses won't generate more userfaultfd events, they'll just > SIGBUS directly. > ^^^^^^ > > Thank you for letting me know about this kernel functionality. > > I need to take some time to investigate it, to see how I could use it. One more hint, please double check though: in QEMU's use case (e.g. precopy only, while not using postcopy) I think you may even be able to install the poisoned pte without MISSING (or any other uffd) mode registered. You can try creating one uffd descriptor (which will bind the desc with the current mm context; in this case we need it to happen only on dest qemu), then try injecting poison ptes anywhere in the guest address ranges. > > The solution I'm suggesting here doesn't cover as many cases as the > UFFDIO_POISON use could help to implement. > But it gives us a possibility to live migrate VMs that already > experienced memory errors, trusting the VM kernel to correctly deal with > these past errors. > > AFAIK, currently, a standard qemu VM that has experienced a memory error > can't be live migrated at all. I suppose here you meant AO errors only. IIUC the major issue regarding migration is AO errors will become ARs on src qemu when vcpu accessed, which means AOs are all fine if not forwarded to guest. 
However after migration that is not guaranteed. Poisoned ptes properly
installed on dest basically grant QEMU the ability to "migrate a
poisoned page", meanwhile without really wasting a physical page on
dest, making sure those AO error addrs keep generating ARs even after
migration.

It seems the 1st patch is still needed even in this case?

Thanks,
On 10/17/23 17:13, Peter Xu wrote: > On Tue, Oct 17, 2023 at 02:38:48AM +0200, William Roche wrote: >> On 10/16/23 18:48, Peter Xu wrote: >>> On Fri, Oct 13, 2023 at 03:08:39PM +0000, “William Roche wrote: >>>> diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c >>>> index 5e95c496bb..e8db6380c1 100644 >>>> --- a/target/arm/kvm64.c >>>> +++ b/target/arm/kvm64.c >>>> @@ -1158,7 +1158,6 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr) >>>> ram_addr = qemu_ram_addr_from_host(addr); >>>> if (ram_addr != RAM_ADDR_INVALID && >>>> kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) { >>>> - kvm_hwpoison_page_add(ram_addr); >>>> /* >>>> * If this is a BUS_MCEERR_AR, we know we have been called >>>> * synchronously from the vCPU thread, so we can easily >>>> @@ -1169,7 +1168,12 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr) >>>> * called synchronously from the vCPU thread, or a bit >>>> * later from the main thread, so doing the injection of >>>> * the error would be more complicated. >>>> + * In this case, BUS_MCEERR_AO errors are unknown from the >>>> + * guest, and we will prevent migration as long as this >>>> + * poisoned page hasn't generated a BUS_MCEERR_AR error >>>> + * that the guest takes into account. >>>> */ >>>> + kvm_hwpoison_page_add(ram_addr, (code == BUS_MCEERR_AR)); >>> I'm curious why ARM doesn't forward this event to guest even if it's AO. >>> X86 does it, and makes more sense to me. >> I agree that forwarding this error is the best option to implement. >> But an important note about this aspect is that only Intel architecture >> handles the AO error forwarding correctly; currently an AMD VM crashes >> when an AO error relay is attempted. >> >> That's why we've submitted the following kvm patch: >> https://lore.kernel.org/all/20230912211824.90952-3-john.allen@amd.com/ >> Among other AMD enhancements to better deal with MCE relay. > I see. > >> >>> Not familiar with arm, do you >>> know the reason? 
>> I can't answer this question as I don't know anything about the specific
>> 'complications' mentioned in the comment above. Maybe something around
>> the injection through ACPI GHES and its interrupt mechanism ??
>> But note also that ignoring AO errors is just a question of relying on
>> the Hypervisor kernel to generate an AR error when the asynchronously
>> poisoned page is touched later. Which can be acceptable -- when the
>> system guarantees the AR fault on the page.
>>
>>> I think this patch needs review from ARM and/or KVM side. Do you want to
>>> have the 1st patch merged, or rather wait for the whole set?
>>
>> I think that integrating the first patch alone is not an option
>> as we would introduce the silent data corruption possibility I
>> described.
>
> I asked because I think patch 1 itself is still an improvement, which
> avoids src VM from crashing when hitting poisoned pages.  Especially IIUC
> on some arch (Intel?) it's a complete fix.

Yes, this is almost true. In my opinion this fix would be a transitional
solution: a small change to the code to allow a VM live migration after
a memory error. The change is only needed on the source machine; no
change is necessary on the destination machine.

But let me just repeat that this fix relies on trusting the VM kernel to
correctly deal with the memory errors it knows about, to avoid memory
corruption!

Note also that large pages are taken into account too for our live
migration, but the poisoning of a qemu large page requires more work,
especially for a VM using standard 4k pages on top of these qemu large
pages -- and this is a completely different issue. I'm mentioning this
aspect here because even on Intel platforms, underlying large page
poisoning needs to be reported better to the running VM, as a large
section of its memory is gone (not just a single head 4k page), and
adding live migration to this problem will not make things any better...
> But for sure we can keep them as a whole series if you want, but then it'll
> be good you add some more reviewers; at least some ARM/AMD developers,
> perhaps.

I'll add qemu-arm@nongnu.org to the CC list for the updated version I'm
going to send, with a word about the ARM specificity of the second patch.

>> It would be better to integrate the two of them as a whole
>> set. But the use of the kernel feature you indicated me can change all
>> of that !
>>
>>> Another thing to mention: feel free to look at a recent addition of ioctl
>>> from userfault, where it can inject poisoned ptes:
>>>
>>> https://lore.kernel.org/r/20230707215540.2324998-1-axelrasmussen@google.com
>>>
>>> I'm wondering if that'll be helpful to qemu too, where we can migrate
>>> hwpoison_page_list and enforce the poisoning on dest. Then even for AO
>>> when accessed by guest it'll generate another MCE on dest.
>>
>> I could be missing something, but Yes, this is exactly how I understand
>> this kernel feature use case with its description in:
>> https://lore.kernel.org/all/20230707215540.2324998-5-axelrasmussen@google.com/
>>
>> vvvvvv
>> So the basic way to use this new feature is:
>>
>> - On the new host, the guest's memory is registered with userfaultfd, in
>>   either MISSING or MINOR mode (doesn't really matter for this purpose).
>> - On any first access, we get a userfaultfd event. At this point we can
>>   communicate with the old host to find out if the page was poisoned.
>> - If so, we can respond with a UFFDIO_POISON - this places a swap marker
>>   so any future accesses will SIGBUS. Because the pte is now "present",
>>   future accesses won't generate more userfaultfd events, they'll just
>>   SIGBUS directly.
>> ^^^^^^
>>
>> Thank you for letting me know about this kernel functionality.
>>
>> I need to take some time to investigate it, to see how I could use it.
>
> One more hint, please double check though: in QEMU's use case (e.g. precopy
> only, while not using postcopy) I think you may even be able to install the
> poisoned pte without MISSING (or any other uffd) mode registered.
>
> You can try creating one uffd descriptor (which will bind the desc with the
> current mm context; in this case we need it to happen only on dest qemu),
> then try injecting poison ptes anywhere in the guest address ranges.

I did that in a self-contained test program: memory allocation,
UFFDIO_REGISTER and use of UFFDIO_POISON. A register mode has to be
given, but MISSING and WP both work. This gives the possibility to
inject poison in a much easier and better way than using
madvise(... MADV_HWPOISON, ...) for example.

But it implies a lot of other changes:
- The source has to flag the error pages to indicate a poison
  (new flag in the exchange protocol)
- The destination has to be able to deal with the new protocol
- The destination has to be able to mark the pages as poisoned
  (authorized to use userfaultfd)
- So both source and destination have to be upgraded (of course
  qemu, but also an appropriate kernel version providing
  UFFDIO_POISON on the destination)
- We may need to be able to negotiate a fallback solution
- An indication of the method to use could belong to the
  migration capabilities and parameters
- etc...

>> The solution I'm suggesting here doesn't cover as many cases as the
>> UFFDIO_POISON use could help to implement.
>> But it gives us a possibility to live migrate VMs that already
>> experienced memory errors, trusting the VM kernel to correctly deal with
>> these past errors.
>>
>> AFAIK, currently, a standard qemu VM that has experienced a memory error
>> can't be live migrated at all.
>
> I suppose here you meant AO errors only.

No, if any of the memory used by a VM has been impacted by a memory
error (either with BUS_MCEERR_AO or BUS_MCEERR_AR) this memory isn't
accessible anymore, and the live migration (whatever mechanism used)
can't read the content of the impacted location.
So AFAIK any mechanism used currently doesn't work. When we have such an error, either the migration fails (like RDMA currently does) or it completely crashes qemu when the migration is attempted. > IIUC the major issue regarding migration is AO errors will become ARs on > src qemu when vcpu accessed, This is correct. > which means AOs are all fine if not forwarded > to guest. You are right in the case where the VM stays on the source machine. With my current proposed fix we don't forward poison to the destination machine, so the problem is not to be able to access the content of these Uncorrected Error memory locations -- which means that if this content is needed we have to inform the requester that the data is inaccessible -- that's what the poison is for, and we count on the running VM kernel to enforce the poisoning. And if the AO error hasn't been reported to the VM running Kernel, we either must forward the poison to the destination machine or prevent the live migration. (That's what the second patch does for the platform ignoring the AO errors - currently only ARM) > However after migration that is not guaranteed. Poisoned ptes > properly installed on dest basically grants QEMU the ability to "migrate a > poisoned page", meanwhile without really wasting a physical page on dest, > making sure those AO error addrs keep generating ARs even after migration. Absolutely, this is the huge advantage of such a solution. > It seems the 1st patch is still needed even in this case? If we can transfer a poison to the destination machine, there is no need for the 1st patch (transforming poisoned pages into zero pages). That's the reason why I do think that enhancing both the source qemu and the destination qemu to deal with poisoned pages is the real (long term) fix. In the meantime, we could use this current small set of 2 patches to avoid the qemu crashes on live migration after a memory fault. 
I hope this clarifies the situation, and the reason why I'd prefer the
two patches to be integrated together.

I've updated the code to the latest source tree (resolving conflicts
with 8697eb857769 and 72a8192e225c) and I'm sending a v5 with this
update, adapting the commit message to reflect the new stack trace on
crash. I also re-ran my migration tests, with and without compression,
on ARM and x86 platforms.

I hope this can help.
On Mon, Nov 06, 2023 at 10:38:14PM +0100, William Roche wrote:
> Note also that large pages are taken into account too for our live
> migration, but the poisoning of a qemu large page requires more work
> especially for VM using standard 4k pages on top of these qemu large
> pages -- and this is a completely different issue. I'm mentioning this
> aspect here because even on Intel platforms, underlying large pages
> poisoning needs to be reported better to the running VM as a large
> section of its memory is gone (not just a single head 4k page), and
> adding live migration to this problem will not make things any better...

Good point.. Yes, huge poisoned pages seem all broken.

> I did that in a self-contained test program: memory allocation,
> UFFDIO_REGISTER and use of UFFDIO_POISON. The register mode has to be
> given but MISSING or WP both works. This gives the possibility to inject
> poison in a much easier and better way than using
> madvise(... MADV_HWPOISON, ...) for example.

Indeed, I should have left a comment if I noticed that when reviewing the
POISON changes; I overlooked that find_dst_vma(), even named like that,
will check that the vma uffd context exists -- which shouldn't really be
necessary for UFFDIO_POISON. I can consider proposing a patch to allow
that, which should be trivial.. but it won't help with old kernels, so
QEMU may still need to always register anyway, to make it work whenever
UFFD_FEATURE_POISON is reported.. sad.

> But it implies a lot of other changes:
> - The source has to flag the error pages to indicate a poison
>   (new flag in the exchange protocol)
> - The destination has to be able to deal with the new protocol

IIUC these two can be simply implemented by migrating hwpoison_page_list
over to dest. You need to have a compat bit for doing this, ignoring the
list on old machine types, because old QEMUs will not recognize this vmsd.

QEMU should already support migrating a list object in a VMSD; feel free
to have a look at VMSTATE_QLIST_V().
> - The destination has to be able to mark the pages as poisoned > (authorized to use userfaultfd) Note: userfaultfd is actually available without any privilege if to use UFFDIO_POISON only, as long as to open the uffd (either via syscall or /dev/userfaultfd) using UFFD_FLAG_USER_ONLY. A trick is we can register with UFFD_WP mode (not MISSING; because when a kernel accesses a missing page it'll cause SIGBUS then with USER_ONLY), then inject whatever POISON we want. As long as UFFDIO_WRITEPROTECT is not invoked, UFFD_WP does nothing (unlike MISSING). > - So both source and destination have to be upgraded (of course > qemu but also an appropriate kernel version providing > UFFDIO_POISON on the destination) True. Unfortunately this is not avoidable. > - we may need to be able to negotiate a fall back solution > - an indication of the method to use could belong to the > migration capabilities and parameters For above two points: it's a common issue with migration compatibility. As long as you can provide above VMSD to migrate hwpoison_page_list, marking all old QEMU machine types skipping that, then it should just work. You can have a closer look at anything in hw_compat_* as an example. > - etc... I think you did summarize mostly all the points I can think of; is there really anything more? :) It'll be great if you can, or plan to, fix that for good. Thanks,
On 11/8/23 22:45, Peter Xu wrote: > On Mon, Nov 06, 2023 at 10:38:14PM +0100, William Roche wrote: >> But it implies a lot of other changes: >> - The source has to flag the error pages to indicate a poison >> (new flag in the exchange protocole) >> - The destination has to be able to deal with the new protocole > IIUC these two can be simply implemented by migrating hwpoison_page_list > over to dest. You need to have a compat bit for doing this, ignoring the > list on old machine types, because old QEMUs will not recognize this vmsd. > > QEMU should even support migrating a list object in VMSD, feel free to have > a look at VMSTATE_QLIST_V(). This is another area that I'll need to learn about. >> - The destination has to be able to mark the pages as poisoned >> (authorized to use userfaultfd) > Note: userfaultfd is actually available without any privilege if to use > UFFDIO_POISON only, as long as to open the uffd (either via syscall or > /dev/userfaultfd) using UFFD_FLAG_USER_ONLY. > > A trick is we can register with UFFD_WP mode (not MISSING; because when a > kernel accesses a missing page it'll cause SIGBUS then with USER_ONLY), > then inject whatever POISON we want. As long as UFFDIO_WRITEPROTECT is not > invoked, UFFD_WP does nothing (unlike MISSING). > >> - So both source and destination have to be upgraded (of course >> qemu but also an appropriate kernel version providing >> UFFDIO_POISON on the destination) > True. Unfortunately this is not avoidable. > >> - we may need to be able to negotiate a fall back solution >> - an indication of the method to use could belong to the >> migration capabilities and parameters > For above two points: it's a common issue with migration compatibility. As > long as you can provide above VMSD to migrate hwpoison_page_list, marking > all old QEMU machine types skipping that, then it should just work. > > You can have a closer look at anything in hw_compat_* as an example. Yes, I'll do that. >> - etc... 
> I think you did summarize mostly all the points I can think of; is there > really anything more? :) Probably some work to select the poison migration method (allowing a migration transforming poison into zeros as a fall back method if the poison migration itself with UFFDIO_POISON can't be used, or not) for example. > It'll be great if you can, or plan to, fix that for good. Thanks for the offer ;) I'd really like to implement that, but I currently have another pressing issue to work on. I should be back on this topic within a few months. I'm now waiting for some feedback from the ARM architecture reviewer(s). Thanks a lot for all your suggestions.
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 850577ea0e..2829b6372a 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1133,8 +1133,17 @@ int kvm_vm_check_extension(KVMState *s, unsigned int extension)
     return ret;
 }
 
+/*
+ * We track the poisoned pages to be able to:
+ * - replace them on VM reset
+ * - skip them when migrating
+ * - block a migration for a VM where a poisoned page is ignored
+ *   as this VM kernel (not knowing about the error) could
+ *   incorrectly access the page.
+ */
 typedef struct HWPoisonPage {
     ram_addr_t ram_addr;
+    bool vm_known;
     QLIST_ENTRY(HWPoisonPage) list;
 } HWPoisonPage;
 
@@ -1166,20 +1175,36 @@ bool kvm_hwpoisoned_page(RAMBlock *block, void *offset)
     return false;
 }
 
-void kvm_hwpoison_page_add(ram_addr_t ram_addr)
+void kvm_hwpoison_page_add(ram_addr_t ram_addr, bool known)
 {
     HWPoisonPage *page;
 
     QLIST_FOREACH(page, &hwpoison_page_list, list) {
         if (page->ram_addr == ram_addr) {
+            if (known && !page->vm_known) {
+                page->vm_known = true;
+            }
             return;
         }
     }
     page = g_new(HWPoisonPage, 1);
     page->ram_addr = ram_addr;
+    page->vm_known = known;
     QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);
 }
 
+bool kvm_hwpoisoned_unknown(void)
+{
+    HWPoisonPage *pg;
+
+    QLIST_FOREACH(pg, &hwpoison_page_list, list) {
+        if (!pg->vm_known) {
+            return true;
+        }
+    }
+    return false;
+}
+
 static uint32_t adjust_ioeventfd_endianness(uint32_t val, uint32_t size)
 {
 #if HOST_BIG_ENDIAN != TARGET_BIG_ENDIAN
diff --git a/accel/stubs/kvm-stub.c b/accel/stubs/kvm-stub.c
index c0a31611df..c43de44263 100644
--- a/accel/stubs/kvm-stub.c
+++ b/accel/stubs/kvm-stub.c
@@ -138,3 +138,8 @@ bool kvm_hwpoisoned_page(RAMBlock *block, void *ram_addr)
 {
     return false;
 }
+
+bool kvm_hwpoisoned_unknown(void)
+{
+    return false;
+}
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index 858688227a..37c8316ce4 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -580,4 +580,10 @@ uint32_t kvm_dirty_ring_size(void);
  * false: page not yet poisoned
  */
 bool kvm_hwpoisoned_page(RAMBlock *block, void *ram_addr);
+
+/**
+ * kvm_hwpoisoned_unknown - indicate if a qemu reported memory error
+ * is still unknown to (hasn't been injected into) the VM kernel.
+ */
+bool kvm_hwpoisoned_unknown(void);
 #endif
diff --git a/include/sysemu/kvm_int.h b/include/sysemu/kvm_int.h
index a5b9122cb8..2dfde40690 100644
--- a/include/sysemu/kvm_int.h
+++ b/include/sysemu/kvm_int.h
@@ -136,10 +136,11 @@ void kvm_set_max_memslot_size(hwaddr max_slot_size);
  *
  * Parameters:
  * @ram_addr: the address in the RAM for the poisoned page
+ * @known: indicate if the error is injected to the VM kernel
  *
  * Add a poisoned page to the list
  *
  * Return: None.
  */
-void kvm_hwpoison_page_add(ram_addr_t ram_addr);
+void kvm_hwpoison_page_add(ram_addr_t ram_addr, bool known);
 #endif
diff --git a/migration/migration.c b/migration/migration.c
index 1c6c81ad49..27e9571aaf 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -66,6 +66,7 @@
 #include "sysemu/qtest.h"
 #include "options.h"
 #include "sysemu/dirtylimit.h"
+#include "sysemu/kvm.h"
 
 static NotifierList migration_state_notifiers =
     NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
@@ -1646,6 +1647,11 @@ static bool migrate_prepare(MigrationState *s, bool blk, bool blk_inc,
         return false;
     }
 
+    if (kvm_hwpoisoned_unknown()) {
+        error_setg(errp, "Can't migrate this vm with ignored poisoned page");
+        return false;
+    }
+
     if (migration_is_blocked(errp)) {
         return false;
     }
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index 5e95c496bb..e8db6380c1 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -1158,7 +1158,6 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
         ram_addr = qemu_ram_addr_from_host(addr);
         if (ram_addr != RAM_ADDR_INVALID &&
             kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
-            kvm_hwpoison_page_add(ram_addr);
             /*
              * If this is a BUS_MCEERR_AR, we know we have been called
              * synchronously from the vCPU thread, so we can easily
@@ -1169,7 +1168,12 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
              * called synchronously from the vCPU thread, or a bit
              * later from the main thread, so doing the injection of
              * the error would be more complicated.
+             * In this case, BUS_MCEERR_AO errors are unknown from the
+             * guest, and we will prevent migration as long as this
+             * poisoned page hasn't generated a BUS_MCEERR_AR error
+             * that the guest takes into account.
              */
+            kvm_hwpoison_page_add(ram_addr, (code == BUS_MCEERR_AR));
             if (code == BUS_MCEERR_AR) {
                 kvm_cpu_synchronize_state(c);
                 if (!acpi_ghes_record_errors(ACPI_HEST_SRC_ID_SEA, paddr)) {
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index f6c7f7e268..f9365b4457 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -649,7 +649,7 @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
         ram_addr = qemu_ram_addr_from_host(addr);
         if (ram_addr != RAM_ADDR_INVALID &&
             kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
-            kvm_hwpoison_page_add(ram_addr);
+            kvm_hwpoison_page_add(ram_addr, true);
             kvm_mce_inject(cpu, paddr, code);
             /*