| Message ID | 20210506184241.618958-6-bgardon@google.com (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | Lazily allocate memslot rmaps |
On Thu, May 6, 2021 at 11:43 AM Ben Gardon <bgardon@google.com> wrote:
>
> Add a field to control whether new memslots should have rmaps allocated
> for them. As of this change, it's not safe to skip allocating rmaps, so
> the field is always set to allocate rmaps. Future changes will make it
> safe to operate without rmaps, using the TDP MMU. Then further changes
> will allow the rmaps to be allocated lazily when needed for nested
> operation.
>
> No functional change expected.
>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  8 ++++++++
>  arch/x86/kvm/mmu/mmu.c          |  2 ++
>  arch/x86/kvm/x86.c              | 18 +++++++++++++-----
>  3 files changed, 23 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index ad22d4839bcc..00065f9bbc5e 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1122,6 +1122,12 @@ struct kvm_arch {
>           */
>          spinlock_t tdp_mmu_pages_lock;
>  #endif /* CONFIG_X86_64 */
> +
> +        /*
> +         * If set, rmaps have been allocated for all memslots and should be
> +         * allocated for any newly created or modified memslots.
> +         */
> +        bool memslots_have_rmaps;
>  };
>
>  struct kvm_vm_stat {
> @@ -1853,4 +1859,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
>
>  int kvm_cpu_dirty_log_size(void);
>
> +inline bool kvm_memslots_have_rmaps(struct kvm *kvm);

Woops, this shouldn't be marked inline as it creates build problems
for the next patch with some configs.

> +
>  #endif /* _ASM_X86_KVM_HOST_H */
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 930ac8a7e7c9..8761b4925755 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5469,6 +5469,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
>
>          kvm_mmu_init_tdp_mmu(kvm);
>
> +        kvm->arch.memslots_have_rmaps = true;
> +
>          node->track_write = kvm_mmu_pte_write;
>          node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>          kvm_page_track_register_notifier(kvm, node);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fc32a7dbe4c4..d7a40ce342cc 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10868,7 +10868,13 @@ static int alloc_memslot_rmap(struct kvm_memory_slot *slot,
>          return -ENOMEM;
>  }
>
> -static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
> +bool kvm_memslots_have_rmaps(struct kvm *kvm)
> +{
> +        return kvm->arch.memslots_have_rmaps;
> +}
> +
> +static int kvm_alloc_memslot_metadata(struct kvm *kvm,
> +                                      struct kvm_memory_slot *slot,
>                                        unsigned long npages)
>  {
>          int i;
> @@ -10881,9 +10887,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
>           */
>          memset(&slot->arch, 0, sizeof(slot->arch));
>
> -        r = alloc_memslot_rmap(slot, npages);
> -        if (r)
> -                return r;
> +        if (kvm_memslots_have_rmaps(kvm)) {
> +                r = alloc_memslot_rmap(slot, npages);
> +                if (r)
> +                        return r;
> +        }
>
>          for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
>                  struct kvm_lpage_info *linfo;
> @@ -10954,7 +10962,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>                                     enum kvm_mr_change change)
>  {
>          if (change == KVM_MR_CREATE || change == KVM_MR_MOVE)
> -                return kvm_alloc_memslot_metadata(memslot,
> +                return kvm_alloc_memslot_metadata(kvm, memslot,
>                                                    mem->memory_size >> PAGE_SHIFT);
>          return 0;
>  }
> --
> 2.31.1.607.g51e8a6a459-goog
>
On 07.05.21 01:44, Ben Gardon wrote:
> On Thu, May 6, 2021 at 11:43 AM Ben Gardon <bgardon@google.com> wrote:
>>
>> Add a field to control whether new memslots should have rmaps allocated
>> for them. As of this change, it's not safe to skip allocating rmaps, so
>> the field is always set to allocate rmaps. Future changes will make it
>> safe to operate without rmaps, using the TDP MMU. Then further changes
>> will allow the rmaps to be allocated lazily when needed for nested
>> operation.
>>
>> No functional change expected.
>>
>> Signed-off-by: Ben Gardon <bgardon@google.com>
>> ---
>>  arch/x86/include/asm/kvm_host.h |  8 ++++++++
>>  arch/x86/kvm/mmu/mmu.c          |  2 ++
>>  arch/x86/kvm/x86.c              | 18 +++++++++++++-----
>>  3 files changed, 23 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index ad22d4839bcc..00065f9bbc5e 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -1122,6 +1122,12 @@ struct kvm_arch {
>>           */
>>          spinlock_t tdp_mmu_pages_lock;
>>  #endif /* CONFIG_X86_64 */
>> +
>> +        /*
>> +         * If set, rmaps have been allocated for all memslots and should be
>> +         * allocated for any newly created or modified memslots.
>> +         */
>> +        bool memslots_have_rmaps;
>>  };
>>
>>  struct kvm_vm_stat {
>> @@ -1853,4 +1859,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
>>
>>  int kvm_cpu_dirty_log_size(void);
>>
>> +inline bool kvm_memslots_have_rmaps(struct kvm *kvm);
>
> Woops, this shouldn't be marked inline as it creates build problems
> for the next patch with some configs.

With that fixed

Reviewed-by: David Hildenbrand <david@redhat.com>
On 07/05/21 01:44, Ben Gardon wrote:
>>  struct kvm_vm_stat {
>> @@ -1853,4 +1859,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
>>
>>  int kvm_cpu_dirty_log_size(void);
>>
>> +inline bool kvm_memslots_have_rmaps(struct kvm *kvm);
> Woops, this shouldn't be marked inline as it creates build problems
> for the next patch with some configs.
>

Possibly stupid (or at least lazy) question: why can't it be a "normal"
static inline function?

Paolo
On Fri, May 7, 2021 at 1:28 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 07/05/21 01:44, Ben Gardon wrote:
> >>  struct kvm_vm_stat {
> >> @@ -1853,4 +1859,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
> >>
> >>  int kvm_cpu_dirty_log_size(void);
> >>
> >> +inline bool kvm_memslots_have_rmaps(struct kvm *kvm);
> > Woops, this shouldn't be marked inline as it creates build problems
> > for the next patch with some configs.
> >
>
> Possibly stupid (or at least lazy) question: why can't it be a "normal"
> static inline function?

That was my initial approach (hence the leftover inline) but I got
some warnings about a forward declaration of struct kvm because
arch/x86/include/asm/kvm_host.h doesn't include virt/kvm/kvm_host.h.
Maybe there's a way to fix that, but I didn't want to mess with it.

>
> Paolo
>
On 10/05/21 18:14, Ben Gardon wrote:
>> Possibly stupid (or at least lazy) question: why can't it be a "normal"
>> static inline function?
> That was my initial approach (hence the leftover inline) but I got
> some warnings about a forward declaration of struct kvm because
> arch/x86/include/asm/kvm_host.h doesn't include virt/kvm/kvm_host.h.
> Maybe there's a way to fix that, but I didn't want to mess with it.
>

Let's just use the field directly.

Paolo
On Mon, May 10, 2021 at 9:33 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 10/05/21 18:14, Ben Gardon wrote:
> >> Possibly stupid (or at least lazy) question: why can't it be a "normal"
> >> static inline function?
> > That was my initial approach (hence the leftover inline) but I got
> > some warnings about a forward declaration of struct kvm because
> > arch/x86/include/asm/kvm_host.h doesn't include virt/kvm/kvm_host.h.
> > Maybe there's a way to fix that, but I didn't want to mess with it.
> >
>
> Let's just use the field directly.

That works for me too. I moved to the wrapper because adding the
smp_load_acquire and a comment explaining why we were doing that looked
bloated and I thought it would be easier to document in one place, but
it's not that much bloat, and having the subtleties documented directly
in the function is probably clearer for readers anyway.

>
> Paolo
>