Message ID | 20240809194335.1726916-5-seanjc@google.com (mailing list archive)
---|---
State | New, archived
Series | KVM: x86/mmu: Allow yielding on mmu_notifier zap
On Fri, Aug 9, 2024 at 12:43 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Create mmu_stress_tests's VM with the correct number of extra pages needed
> to map all of memory in the guest. The bug hasn't been noticed before as
> the test currently runs only on x86, which maps guest memory with 1GiB
> pages, i.e. doesn't need much memory in the guest for page tables.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  tools/testing/selftests/kvm/mmu_stress_test.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
> index 847da23ec1b1..5467b12f5903 100644
> --- a/tools/testing/selftests/kvm/mmu_stress_test.c
> +++ b/tools/testing/selftests/kvm/mmu_stress_test.c
> @@ -209,7 +209,13 @@ int main(int argc, char *argv[])
>  	vcpus = malloc(nr_vcpus * sizeof(*vcpus));
>  	TEST_ASSERT(vcpus, "Failed to allocate vCPU array");
>
> -	vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
> +	vm = __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus,
> +#ifdef __x86_64__
> +				    max_mem / SZ_1G,
> +#else
> +				    max_mem / vm_guest_mode_params[VM_MODE_DEFAULT].page_size,
> +#endif
> +				    guest_code, vcpus);

Hmm... I'm trying to square this change with the logic in
vm_nr_pages_required(). That logic seems to be doing what you want (though
it always assumes small mappings IIUC). So it seems like there's something
else that's not being accounted for?

(Also without the extra pages, how does this test actually fail?)
On Thu, Sep 05, 2024, James Houghton wrote:
> On Fri, Aug 9, 2024 at 12:43 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > Create mmu_stress_tests's VM with the correct number of extra pages needed
> > to map all of memory in the guest. The bug hasn't been noticed before as
> > the test currently runs only on x86, which maps guest memory with 1GiB
> > pages, i.e. doesn't need much memory in the guest for page tables.
> >
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> >  tools/testing/selftests/kvm/mmu_stress_test.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
> > index 847da23ec1b1..5467b12f5903 100644
> > --- a/tools/testing/selftests/kvm/mmu_stress_test.c
> > +++ b/tools/testing/selftests/kvm/mmu_stress_test.c
> > @@ -209,7 +209,13 @@ int main(int argc, char *argv[])
> >  	vcpus = malloc(nr_vcpus * sizeof(*vcpus));
> >  	TEST_ASSERT(vcpus, "Failed to allocate vCPU array");
> >
> > -	vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
> > +	vm = __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus,
> > +#ifdef __x86_64__
> > +				    max_mem / SZ_1G,
> > +#else
> > +				    max_mem / vm_guest_mode_params[VM_MODE_DEFAULT].page_size,
> > +#endif
> > +				    guest_code, vcpus);
>
> Hmm... I'm trying to square this change with the logic in
> vm_nr_pages_required().

vm_nr_pages_required() mostly operates on the number of pages that are needed
to set up the VM, e.g. for vCPU stacks.  The one calculation that guesstimates
the number of bytes needed, ucall_nr_pages_required(), does the same thing this
code does: divide the number of total bytes by bytes-per-page.

> That logic seems to be doing what you want (though it always assumes small
> mappings IIUC).

Ya, AFAIK, x86 is the only architecture that lets the test map huge pages on
the guest side.

> So it seems like there's something else that's not being accounted for?

I don't think so?  vm_nr_pages_required() uses @nr_pages to determine how many
page table pages will be needed in the guest, and then adds that many non-huge
pages worth of bytes to the size of memslot 0.

> (Also without the extra pages, how does this test actually fail?)

Guest memory allocation failure when trying to create the guest mappings.
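[Editor's aside: to make the arithmetic being discussed concrete, here is a
minimal standalone sketch. The helper name is made up and is not part of the
selftest library; only the divisions mirror the patch and the explanation
above.]

```c
/*
 * Illustrative sketch only -- not selftest library code.  It restates the
 * value the patch passes as the extra-pages argument of
 * __vm_create_with_vcpus(): the total guest memory the test will map,
 * divided by the size of each guest-side mapping (1GiB pages on x86,
 * base pages on other architectures).
 */
#include <stdbool.h>
#include <stdint.h>

#define SZ_1G	(1ULL << 30)

static uint64_t extra_pages_for_mappings(uint64_t max_mem,
					 uint64_t guest_page_size,
					 bool maps_with_1gib_pages)
{
	uint64_t bytes_per_mapping = maps_with_1gib_pages ? SZ_1G
							  : guest_page_size;

	return max_mem / bytes_per_mapping;
}
```

On x86 this works out to one extra page per GiB of test memory; elsewhere it
is one per base page, matching the two arms of the #ifdef in the patch.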
On Thu, Sep 5, 2024 at 9:42 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Sep 05, 2024, James Houghton wrote:
> > On Fri, Aug 9, 2024 at 12:43 PM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > Create mmu_stress_tests's VM with the correct number of extra pages needed
> > > to map all of memory in the guest. The bug hasn't been noticed before as
> > > the test currently runs only on x86, which maps guest memory with 1GiB
> > > pages, i.e. doesn't need much memory in the guest for page tables.
> > >
> > > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > > ---
> > >  tools/testing/selftests/kvm/mmu_stress_test.c | 8 +++++++-
> > >  1 file changed, 7 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
> > > index 847da23ec1b1..5467b12f5903 100644
> > > --- a/tools/testing/selftests/kvm/mmu_stress_test.c
> > > +++ b/tools/testing/selftests/kvm/mmu_stress_test.c
> > > @@ -209,7 +209,13 @@ int main(int argc, char *argv[])
> > >  	vcpus = malloc(nr_vcpus * sizeof(*vcpus));
> > >  	TEST_ASSERT(vcpus, "Failed to allocate vCPU array");
> > >
> > > -	vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
> > > +	vm = __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus,
> > > +#ifdef __x86_64__
> > > +				    max_mem / SZ_1G,
> > > +#else
> > > +				    max_mem / vm_guest_mode_params[VM_MODE_DEFAULT].page_size,
> > > +#endif
> > > +				    guest_code, vcpus);
> >
> > Hmm... I'm trying to square this change with the logic in
> > vm_nr_pages_required().
>
> vm_nr_pages_required() mostly operates on the number of pages that are needed
> to set up the VM, e.g. for vCPU stacks.  The one calculation that guesstimates
> the number of bytes needed, ucall_nr_pages_required(), does the same thing this
> code does: divide the number of total bytes by bytes-per-page.

Oh, yes, you're right. It's only accounting for the page tables for the 512
pages for memslot 0. Sorry for the noise. Feel free to add:

Reviewed-by: James Houghton <jthoughton@google.com>
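[Editor's aside: a loose restatement of the memslot-0 sizing Sean and James
describe above. The constants are placeholders and the real
vm_nr_pages_required() also accounts for vCPU stacks, the ucall region, and
other library needs; this is only a sketch of the accounting, not the actual
implementation.]

```c
/*
 * Loose sketch, not the real vm_nr_pages_required(): memslot 0 is sized
 * from a small default, plus any extra pages the test requests, plus an
 * estimate of the page-table pages needed to map that memory with
 * non-huge pages.  Both constants below are placeholders.
 */
#include <stdint.h>

#define DEFAULT_NR_PAGES	512	/* placeholder default, per the thread */
#define PTES_PER_PAGE		512	/* placeholder: 4KiB page / 8-byte PTE */

static uint64_t memslot0_pages(uint64_t nr_extra_pages)
{
	uint64_t nr_pages = DEFAULT_NR_PAGES + nr_extra_pages;

	/* One page-table page per PTES_PER_PAGE pages mapped, rounded up. */
	nr_pages += (nr_pages + PTES_PER_PAGE - 1) / PTES_PER_PAGE;

	return nr_pages;
}
```

With nr_extra_pages = 0, only the page tables for the library's own ~512-page
default are covered, which is why mapping all of max_mem fails without this
patch.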
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 847da23ec1b1..5467b12f5903 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -209,7 +209,13 @@ int main(int argc, char *argv[])
 	vcpus = malloc(nr_vcpus * sizeof(*vcpus));
 	TEST_ASSERT(vcpus, "Failed to allocate vCPU array");
 
-	vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
+	vm = __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus,
+#ifdef __x86_64__
+				    max_mem / SZ_1G,
+#else
+				    max_mem / vm_guest_mode_params[VM_MODE_DEFAULT].page_size,
+#endif
+				    guest_code, vcpus);
 
 	max_gpa = vm->max_gfn << vm->page_shift;
 	TEST_ASSERT(max_gpa > (4 * slot_size), "MAXPHYADDR <4gb ");
Create mmu_stress_tests's VM with the correct number of extra pages needed
to map all of memory in the guest. The bug hasn't been noticed before as
the test currently runs only on x86, which maps guest memory with 1GiB
pages, i.e. doesn't need much memory in the guest for page tables.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/mmu_stress_test.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)