Message ID | 1422413668-3509-6-git-send-email-kai.huang@linux.intel.com (mailing list archive)
State      | New, archived
2015-01-28 10:54+0800, Kai Huang:
> This patch adds new kvm_x86_ops dirty logging hooks to enable/disable dirty
> logging for particular memory slot, and to flush potentially logged dirty GPAs
> before reporting slot->dirty_bitmap to userspace.
>
> kvm x86 common code calls these hooks when they are available so PML logic can
> be hidden to VMX specific. Other ARCHs won't be impacted as these hooks are NULL
> for them.
>
> Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
> ---
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -802,6 +802,31 @@ struct kvm_x86_ops {
> +
> +	/*
> +	 * Arch-specific dirty logging hooks. These hooks are only supposed to
> +	 * be valid if the specific arch has hardware-accelerated dirty logging
> +	 * mechanism. Currently only for PML on VMX.
> +	 *
> +	 * - slot_enable_log_dirty:
> +	 *	called when enabling log dirty mode for the slot.

(I guess that "log dirty mode" isn't the meaning that people will think
after seeing 'log_dirty' ...
I'd at least change 'log_dirty' to 'dirty_log' in these names.)

> +	 * - slot_disable_log_dirty:
> +	 *	called when disabling log dirty mode for the slot.
> +	 *	also called when slot is created with log dirty disabled.
> +	 * - flush_log_dirty:
> +	 *	called before reporting dirty_bitmap to userspace.
> +	 * - enable_log_dirty_pt_masked:
> +	 *	called when reenabling log dirty for the GFNs in the mask after
> +	 *	corresponding bits are cleared in slot->dirty_bitmap.

This name is very confusing ... I think we should hint that this is
called after we learn that the page has been written to and would like
to monitor it again.

Using something like collected/refresh? (I'd have to do horrible things
to come up with a good name, sorry.)

> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3780,6 +3780,12 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
>
>  	mutex_lock(&kvm->slots_lock);
>
> +	/*
> +	 * Flush potentially hardware-cached dirty pages to dirty_bitmap.
> +	 */
> +	if (kvm_x86_ops->flush_log_dirty)
> +		kvm_x86_ops->flush_log_dirty(kvm);

(Flushing would make more sense in kvm_get_dirty_log_protect().)

> +
>  	r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);
>
>  	/*
> @@ -7533,6 +7539,56 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  	return 0;
>  }
>
> +static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
> +				     struct kvm_memory_slot *new)
> +{
> +	/* Still write protect RO slot */
> +	if (new->flags & KVM_MEM_READONLY) {
> +		kvm_mmu_slot_remove_write_access(kvm, new);

We didn't write protect RO slots before, does this patch depend on it?

> @@ -7562,16 +7618,15 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> -	if ((change != KVM_MR_DELETE) && (new->flags & KVM_MEM_LOG_DIRTY_PAGES))
> -		kvm_mmu_slot_remove_write_access(kvm, new);
> +	if (change != KVM_MR_DELETE)
> +		kvm_mmu_slot_apply_flags(kvm, new);
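As a rough illustration of the parenthetical suggestion above, moving the flush into the generic helper could look something like the sketch below; kvm_arch_flush_log_dirty() is a hypothetical name invented for the sketch, not a hook this series defines:

/*
 * Sketch only -- not the code in virt/kvm/kvm_main.c.  It shows where the
 * flush could sit if generic code, rather than x86's
 * kvm_vm_ioctl_get_dirty_log(), asked the arch to drain hardware dirty
 * logs (e.g. the PML buffer) before the bitmap is reported.
 */
int kvm_get_dirty_log_protect(struct kvm *kvm,
			      struct kvm_dirty_log *log, bool *is_dirty)
{
	/*
	 * Hypothetical hook; arches without hardware-assisted dirty
	 * logging would provide an empty stub.
	 */
	kvm_arch_flush_log_dirty(kvm);

	/*
	 * ... existing logic: copy dirty_bitmap to userspace, clear it,
	 * and re-enable logging / write protection for the dirty GFNs ...
	 */
	return 0;
}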
On 02/03/2015 11:53 PM, Radim Krčmář wrote:
> 2015-01-28 10:54+0800, Kai Huang:
>> This patch adds new kvm_x86_ops dirty logging hooks to enable/disable dirty
>> logging for particular memory slot, and to flush potentially logged dirty GPAs
>> before reporting slot->dirty_bitmap to userspace.
>>
>> kvm x86 common code calls these hooks when they are available so PML logic can
>> be hidden to VMX specific. Other ARCHs won't be impacted as these hooks are NULL
>> for them.
>>
>> Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
>> ---
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -802,6 +802,31 @@ struct kvm_x86_ops {
>> +
>> +	/*
>> +	 * Arch-specific dirty logging hooks. These hooks are only supposed to
>> +	 * be valid if the specific arch has hardware-accelerated dirty logging
>> +	 * mechanism. Currently only for PML on VMX.
>> +	 *
>> +	 * - slot_enable_log_dirty:
>> +	 *	called when enabling log dirty mode for the slot.
> (I guess that "log dirty mode" isn't the meaning that people will think
> after seeing 'log_dirty' ...
> I'd at least change 'log_dirty' to 'dirty_log' in these names.)
>
>> +	 * - slot_disable_log_dirty:
>> +	 *	called when disabling log dirty mode for the slot.
>> +	 *	also called when slot is created with log dirty disabled.
>> +	 * - flush_log_dirty:
>> +	 *	called before reporting dirty_bitmap to userspace.
>> +	 * - enable_log_dirty_pt_masked:
>> +	 *	called when reenabling log dirty for the GFNs in the mask after
>> +	 *	corresponding bits are cleared in slot->dirty_bitmap.
> This name is very confusing ... I think we should hint that this is
> called after we learn that the page has been written to and would like
> to monitor it again.
>
> Using something like collected/refresh? (I'd have to do horrible things
> to come up with a good name, sorry.)
>
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -3780,6 +3780,12 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
>>
>>  	mutex_lock(&kvm->slots_lock);
>>
>> +	/*
>> +	 * Flush potentially hardware-cached dirty pages to dirty_bitmap.
>> +	 */
>> +	if (kvm_x86_ops->flush_log_dirty)
>> +		kvm_x86_ops->flush_log_dirty(kvm);
> (Flushing would make more sense in kvm_get_dirty_log_protect().)
>
>> +
>>  	r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);
>>
>>  	/*
>> @@ -7533,6 +7539,56 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>  	return 0;
>>  }
>>
>> +static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
>> +				     struct kvm_memory_slot *new)
>> +{
>> +	/* Still write protect RO slot */
>> +	if (new->flags & KVM_MEM_READONLY) {
>> +		kvm_mmu_slot_remove_write_access(kvm, new);
> We didn't write protect RO slots before, does this patch depend on it?

No PML doesn't depend on it to work. It's suggested by Paolo.

Thanks,
-Kai

>
>> @@ -7562,16 +7618,15 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>> -	if ((change != KVM_MR_DELETE) && (new->flags & KVM_MEM_LOG_DIRTY_PAGES))
>> -		kvm_mmu_slot_remove_write_access(kvm, new);
>> +	if (change != KVM_MR_DELETE)
>> +		kvm_mmu_slot_apply_flags(kvm, new);
2015-02-05 14:29+0800, Kai Huang:
> >>+	/* Still write protect RO slot */
> >>+	if (new->flags & KVM_MEM_READONLY) {
> >>+		kvm_mmu_slot_remove_write_access(kvm, new);
> >We didn't write protect RO slots before, does this patch depend on it?
> No PML doesn't depend on it to work. It's suggested by Paolo.

Thanks, it would have deserved a separate patch, IMO.
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 67a98d7..57916ec 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -802,6 +802,31 @@ struct kvm_x86_ops {
 	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
 
 	void (*sched_in)(struct kvm_vcpu *kvm, int cpu);
+
+	/*
+	 * Arch-specific dirty logging hooks. These hooks are only supposed to
+	 * be valid if the specific arch has hardware-accelerated dirty logging
+	 * mechanism. Currently only for PML on VMX.
+	 *
+	 * - slot_enable_log_dirty:
+	 *	called when enabling log dirty mode for the slot.
+	 * - slot_disable_log_dirty:
+	 *	called when disabling log dirty mode for the slot.
+	 *	also called when slot is created with log dirty disabled.
+	 * - flush_log_dirty:
+	 *	called before reporting dirty_bitmap to userspace.
+	 * - enable_log_dirty_pt_masked:
+	 *	called when reenabling log dirty for the GFNs in the mask after
+	 *	corresponding bits are cleared in slot->dirty_bitmap.
+	 */
+	void (*slot_enable_log_dirty)(struct kvm *kvm,
+				      struct kvm_memory_slot *slot);
+	void (*slot_disable_log_dirty)(struct kvm *kvm,
+				       struct kvm_memory_slot *slot);
+	void (*flush_log_dirty)(struct kvm *kvm);
+	void (*enable_log_dirty_pt_masked)(struct kvm *kvm,
+					   struct kvm_memory_slot *slot,
+					   gfn_t offset, unsigned long mask);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6c24af3..c5833ca 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1335,7 +1335,11 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				struct kvm_memory_slot *slot,
 				gfn_t gfn_offset, unsigned long mask)
 {
-	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+	if (kvm_x86_ops->enable_log_dirty_pt_masked)
+		kvm_x86_ops->enable_log_dirty_pt_masked(kvm, slot, gfn_offset,
+				mask);
+	else
+		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
 }
 
 static bool rmap_write_protect(struct kvm *kvm, u64 gfn)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3a7fcff..442ee7d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3780,6 +3780,12 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 
 	mutex_lock(&kvm->slots_lock);
 
+	/*
+	 * Flush potentially hardware-cached dirty pages to dirty_bitmap.
+	 */
+	if (kvm_x86_ops->flush_log_dirty)
+		kvm_x86_ops->flush_log_dirty(kvm);
+
 	r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);
 
 	/*
@@ -7533,6 +7539,56 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	return 0;
 }
 
+static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
+				     struct kvm_memory_slot *new)
+{
+	/* Still write protect RO slot */
+	if (new->flags & KVM_MEM_READONLY) {
+		kvm_mmu_slot_remove_write_access(kvm, new);
+		return;
+	}
+
+	/*
+	 * Call kvm_x86_ops dirty logging hooks when they are valid.
+	 *
+	 * kvm_x86_ops->slot_disable_log_dirty is called when:
+	 *
+	 *  - KVM_MR_CREATE with dirty logging is disabled
+	 *  - KVM_MR_FLAGS_ONLY with dirty logging is disabled in new flag
+	 *
+	 * The reason is, in case of PML, we need to set D-bit for any slots
+	 * with dirty logging disabled in order to eliminate unnecessary GPA
+	 * logging in PML buffer (and potential PML buffer full VMEXIT). This
+	 * guarantees leaving PML enabled during guest's lifetime won't have
+	 * any additional overhead from PML when guest is running with dirty
+	 * logging disabled for memory slots.
+	 *
+	 * kvm_x86_ops->slot_enable_log_dirty is called when switching new slot
+	 * to dirty logging mode.
+	 *
+	 * If kvm_x86_ops dirty logging hooks are invalid, use write protect.
+	 *
+	 * In case of write protect:
+	 *
+	 * Write protect all pages for dirty logging.
+	 *
+	 * All the sptes including the large sptes which point to this
+	 * slot are set to readonly. We can not create any new large
+	 * spte on this slot until the end of the logging.
+	 *
+	 * See the comments in fast_page_fault().
+	 */
+	if (new->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+		if (kvm_x86_ops->slot_enable_log_dirty)
+			kvm_x86_ops->slot_enable_log_dirty(kvm, new);
+		else
+			kvm_mmu_slot_remove_write_access(kvm, new);
+	} else {
+		if (kvm_x86_ops->slot_disable_log_dirty)
+			kvm_x86_ops->slot_disable_log_dirty(kvm, new);
+	}
+}
+
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
@@ -7562,16 +7618,15 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	new = id_to_memslot(kvm->memslots, mem->slot);
 
 	/*
-	 * Write protect all pages for dirty logging.
+	 * Set up write protection and/or dirty logging for the new slot.
 	 *
-	 * All the sptes including the large sptes which point to this
-	 * slot are set to readonly. We can not create any new large
-	 * spte on this slot until the end of the logging.
-	 *
-	 * See the comments in fast_page_fault().
+	 * For KVM_MR_DELETE and KVM_MR_MOVE, the shadow pages of old slot have
+	 * been zapped so no dirty logging stuff is needed for old slot. For
+	 * KVM_MR_FLAGS_ONLY, the old slot is essentially the same one as the
+	 * new and it's also covered when dealing with the new slot.
 	 */
-	if ((change != KVM_MR_DELETE) && (new->flags & KVM_MEM_LOG_DIRTY_PAGES))
-		kvm_mmu_slot_remove_write_access(kvm, new);
+	if (change != KVM_MR_DELETE)
+		kvm_mmu_slot_apply_flags(kvm, new);
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
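The hunks above only add the hook points; the VMX implementation is left to later patches in the series. As a rough, hypothetical sketch of how a PML-capable backend might wire them up (the function and helper names below are assumptions for illustration, not code introduced by this patch):

/*
 * Illustrative only: how a PML-capable backend might fill in the new
 * kvm_x86_ops hooks.  The helpers called here are assumed for the
 * sketch; this patch itself defines only the hook points.
 */
static void vmx_slot_enable_log_dirty(struct kvm *kvm,
				      struct kvm_memory_slot *slot)
{
	/* Clear D-bits so the first write to each page is logged by PML. */
	kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
	kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
}

static void vmx_flush_log_dirty(struct kvm *kvm)
{
	/* Drain per-vcpu PML buffers into slot->dirty_bitmap. */
	kvm_flush_pml_buffers(kvm);
}

static struct kvm_x86_ops vmx_x86_ops = {
	/* ... existing callbacks ... */
	.slot_enable_log_dirty = vmx_slot_enable_log_dirty,
	.flush_log_dirty = vmx_flush_log_dirty,
	/* .slot_disable_log_dirty and .enable_log_dirty_pt_masked likewise */
};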
This patch adds new kvm_x86_ops dirty logging hooks to enable/disable dirty
logging for particular memory slot, and to flush potentially logged dirty GPAs
before reporting slot->dirty_bitmap to userspace.

kvm x86 common code calls these hooks when they are available so the PML logic
can be kept VMX-specific. Other ARCHs won't be impacted as these hooks are NULL
for them.

Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
 arch/x86/include/asm/kvm_host.h | 25 +++++++++++++++
 arch/x86/kvm/mmu.c              |  6 +++-
 arch/x86/kvm/x86.c              | 71 ++++++++++++++++++++++++++++++++++++-----
 3 files changed, 93 insertions(+), 9 deletions(-)
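For context on the reporting path these hooks feed, here is a minimal userspace sketch of fetching a slot's dirty bitmap with the standard KVM_GET_DIRTY_LOG ioctl; vm_fd, the slot id, and the bitmap allocation are assumed to come from earlier VM setup that is not shown. With this series, flush_log_dirty runs inside the ioctl before the bitmap is copied out:

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/*
 * Sketch: fetch the dirty bitmap for one memory slot of a VM.
 * 'bitmap' must be at least slot_pages/8 bytes, 64-bit aligned.
 */
static int get_slot_dirty_log(int vm_fd, unsigned int slot, void *bitmap)
{
	struct kvm_dirty_log log;

	memset(&log, 0, sizeof(log));
	log.slot = slot;
	log.dirty_bitmap = bitmap;

	/*
	 * With this patch, kvm_vm_ioctl_get_dirty_log() first calls
	 * kvm_x86_ops->flush_log_dirty() (e.g. to drain the PML buffer)
	 * and only then copies and clears slot->dirty_bitmap.
	 */
	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}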