Message ID: 20190104085405.40356-10-Tianyu.Lan@microsoft.com (mailing list archive)
State: Not Applicable
Series: X86/KVM/Hyper-V: Add HV ept tlb range list flush support in KVM
On 04/01/19 09:54, lantianyu1986@gmail.com wrote:
>  		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
>  					  PT_PAGE_TABLE_LEVEL, slot);
> -		__rmap_write_protect(kvm, rmap_head, false);
> +		flush |= __rmap_write_protect(kvm, rmap_head, false);
>
>  		/* clear the first set bit */
>  		mask &= mask - 1;
>  	}
> +
> +	if (flush && kvm_available_flush_tlb_with_range()) {
> +		kvm_flush_remote_tlbs_with_address(kvm,
> +				slot->base_gfn + gfn_offset,
> +				hweight_long(mask));

Mask is zero here, so this probably won't work.

In addition, I suspect calling the hypercall once for every 64 pages is
not very efficient.  Passing a flush list into
kvm_mmu_write_protect_pt_masked, and flushing in
kvm_arch_mmu_enable_log_dirty_pt_masked, isn't efficient either because
kvm_arch_mmu_enable_log_dirty_pt_masked is also called once per word.

I don't have any good ideas, except for moving the whole
kvm_clear_dirty_log_protect loop into architecture-specific code (which
is not the direction we want---architectures should share more code, not
less).

Paolo

> +		flush = false;
> +	}
> +
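[Editor's note: a minimal sketch of the issue Paolo points out, for readers
following the archive. The while loop clears one bit of mask per iteration,
so mask is already 0 when the flush is issued and hweight_long(mask) returns
0. The orig_mask snapshot and the __ffs()/__fls() range below are purely
illustrative, not part of the posted patch.]

static bool kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
					    struct kvm_memory_slot *slot,
					    gfn_t gfn_offset, unsigned long mask)
{
	struct kvm_rmap_head *rmap_head;
	unsigned long orig_mask = mask;	/* illustrative: saved before the loop clobbers mask */
	bool flush = false;

	while (mask) {
		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
					  PT_PAGE_TABLE_LEVEL, slot);
		flush |= __rmap_write_protect(kvm, rmap_head, false);

		/* clear the first set bit; mask reaches 0 when the loop exits */
		mask &= mask - 1;
	}

	if (flush && kvm_available_flush_tlb_with_range()) {
		/*
		 * Illustrative fix: flush the span covering every bit that was
		 * set in the original mask (the posted patch passes the
		 * now-zero mask to hweight_long()).
		 */
		kvm_flush_remote_tlbs_with_address(kvm,
				slot->base_gfn + gfn_offset + __ffs(orig_mask),
				__fls(orig_mask) - __ffs(orig_mask) + 1);
		flush = false;
	}

	return flush;
}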
On Tue, Jan 8, 2019 at 12:26 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 04/01/19 09:54, lantianyu1986@gmail.com wrote:
> >  		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
> >  					  PT_PAGE_TABLE_LEVEL, slot);
> > -		__rmap_write_protect(kvm, rmap_head, false);
> > +		flush |= __rmap_write_protect(kvm, rmap_head, false);
> >
> >  		/* clear the first set bit */
> >  		mask &= mask - 1;
> >  	}
> > +
> > +	if (flush && kvm_available_flush_tlb_with_range()) {
> > +		kvm_flush_remote_tlbs_with_address(kvm,
> > +				slot->base_gfn + gfn_offset,
> > +				hweight_long(mask));
>
> Mask is zero here, so this probably won't work.
>
> In addition, I suspect calling the hypercall once for every 64 pages is
> not very efficient.  Passing a flush list into
> kvm_mmu_write_protect_pt_masked, and flushing in
> kvm_arch_mmu_enable_log_dirty_pt_masked, isn't efficient either because
> kvm_arch_mmu_enable_log_dirty_pt_masked is also called once per word.
>

Yes, this is not efficient.

> I don't have any good ideas, except for moving the whole
> kvm_clear_dirty_log_protect loop into architecture-specific code (which
> is not the direction we want---architectures should share more code, not
> less).

kvm_vm_ioctl_clear_dirty_log()/kvm_vm_ioctl_get_dirty_log() get and clear
the dirty log at memslot granularity. When range flush is available, we
may just flush the TLBs of the affected memslot instead of the entire
page table's.

>
> Paolo
>
> > +		flush = false;
> > +	}
> > +

--
Best regards
Tianyu Lan
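[Editor's note: a rough sketch of the memslot-granularity flush Tianyu
suggests, assuming the caller already knows the memslot and whether anything
was write-protected. The helper name kvm_flush_after_dirty_log is
hypothetical; only the two range-flush helpers come from this series.]

/*
 * Hypothetical helper: after harvesting one memslot's dirty log, flush
 * just that memslot's GFN range when range-based flush is available,
 * otherwise fall back to a full remote TLB flush.
 */
static void kvm_flush_after_dirty_log(struct kvm *kvm,
				      struct kvm_memory_slot *memslot,
				      bool flush)
{
	if (!flush)
		return;

	if (kvm_available_flush_tlb_with_range())
		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
						   memslot->npages);
	else
		kvm_flush_remote_tlbs(kvm);
}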
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9d8ee6ea02db..30ed7a79335b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1624,20 +1624,30 @@ static bool __rmap_set_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
  * Used when we do not need to care about huge page mappings: e.g. during dirty
  * logging we do not have any such mappings.
  */
-static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+static bool kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 				     struct kvm_memory_slot *slot,
 				     gfn_t gfn_offset, unsigned long mask)
 {
 	struct kvm_rmap_head *rmap_head;
+	bool flush = false;
 
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_write_protect(kvm, rmap_head, false);
+		flush |= __rmap_write_protect(kvm, rmap_head, false);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
 	}
+
+	if (flush && kvm_available_flush_tlb_with_range()) {
+		kvm_flush_remote_tlbs_with_address(kvm,
+				slot->base_gfn + gfn_offset,
+				hweight_long(mask));
+		flush = false;
+	}
+
+	return flush;
 }
 
 /**
@@ -1683,13 +1693,14 @@ bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				struct kvm_memory_slot *slot,
 				gfn_t gfn_offset, unsigned long mask)
 {
-	if (kvm_x86_ops->enable_log_dirty_pt_masked)
+	if (kvm_x86_ops->enable_log_dirty_pt_masked) {
 		kvm_x86_ops->enable_log_dirty_pt_masked(kvm, slot, gfn_offset,
 				mask);
-	else
-		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
-
-	return true;
+		return true;
+	} else {
+		return kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset,
+				mask);
+	}
 }
 
 /**
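[Editor's note: for context, the two helpers this hunk relies on are
introduced earlier in the series. Roughly, and treating the exact bodies as
an approximation rather than a quote of the posted patches:
kvm_available_flush_tlb_with_range() just checks whether the backend
provides a range-flush callback, and kvm_flush_remote_tlbs_with_address()
wraps a start GFN and page count into a kvm_tlb_range.]

/* Approximate shape of the helpers from earlier patches in this series. */
static inline bool kvm_available_flush_tlb_with_range(void)
{
	return kvm_x86_ops->tlb_remote_flush_with_range;
}

void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
					u64 start_gfn, u64 pages)
{
	struct kvm_tlb_range range;

	range.start_gfn = start_gfn;
	range.pages = pages;

	/* Falls back to a full remote flush when the hypercall is absent or fails. */
	kvm_flush_remote_tlbs_with_range(kvm, &range);
}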