Message ID | 20200123180436.99487-10-bgardon@google.com (mailing list archive)
---|---
State | New
Series | Create a userfaultfd demand paging test
On 23/01/20 19:04, Ben Gardon wrote:
> KVM creates internal memslots covering the region between 3G and 4G in
> the guest physical address space, when the first vCPU is created.
> Mapping this region before creation of the first vCPU causes vCPU
> creation to fail. Prohibit tests from creating such a memslot and fail
> with a helpful warning when they try to.
>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---

The internal memslots are much higher than this (0xfffbc000 and
0xfee00000). I'm changing the patch to block 0xfe0000000 and above,
otherwise it breaks vmx_dirty_log_test.

Paolo
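For context, the two addresses Paolo cites are the default guest-physical
addresses of KVM's x86 internal memslots. The sketch below records where
they sit; the macro names are invented for this note (only the values
appear in the thread), and the kernel-symbol attributions reflect my
reading of the x86 KVM code:

/*
 * Default GPAs of KVM's x86 internal memslots (names invented here):
 * 0xfee00000 is the APIC access page (APIC_DEFAULT_PHYS_BASE in the
 * kernel) and 0xfffbc000 is the VMX EPT identity-map page
 * (VMX_EPT_IDENTITY_PAGETABLE_ADDR), with the TSS pages typically
 * placed just above it via KVM_SET_TSS_ADDR.
 */
#define APIC_ACCESS_PAGE_GPA	0xfee00000UL
#define EPT_IDENT_PAGETABLE_GPA	0xfffbc000UL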
On Fri, Jan 24, 2020 at 12:58 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 23/01/20 19:04, Ben Gardon wrote:
> > KVM creates internal memslots covering the region between 3G and 4G in
> > the guest physical address space, when the first vCPU is created.
> > Mapping this region before creation of the first vCPU causes vCPU
> > creation to fail. Prohibit tests from creating such a memslot and fail
> > with a helpful warning when they try to.
> >
> > Signed-off-by: Ben Gardon <bgardon@google.com>
> > ---
>
> The internal memslots are much higher than this (0xfffbc000 and
> 0xfee00000). I'm changing the patch to block 0xfe0000000 and above,
> otherwise it breaks vmx_dirty_log_test.

Perhaps we're working in different units, but I believe paddrs
0xfffbc000 and 0xfee00000 are between 3GiB and 4GiB.
"Proof by Python":

>>> B=1
>>> KB=1024*B
>>> MB=1024*KB
>>> GB=1024*MB
>>> hex(3*GB)
'0xc0000000'
>>> hex(4*GB)
'0x100000000'
>>> 3*GB == 3<<30
True
>>> 0xfffbc000 > 3*GB
True
>>> 0xfffbc000 < 4*GB
True
>>> 0xfee00000 > 3*GB
True
>>> 0xfee00000 < 4*GB
True

Am I missing something?

I don't think blocking 0xfe0000000 and above is useful, as there's
nothing mapped in that region and AFAIK it's perfectly valid to create
memslots there.

>
> Paolo
>
On 24/01/20 19:41, Ben Gardon wrote:
> On Fri, Jan 24, 2020 at 12:58 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>>
>> On 23/01/20 19:04, Ben Gardon wrote:
>>> KVM creates internal memslots covering the region between 3G and 4G in
>>> the guest physical address space, when the first vCPU is created.
>>> Mapping this region before creation of the first vCPU causes vCPU
>>> creation to fail. Prohibit tests from creating such a memslot and fail
>>> with a helpful warning when they try to.
>>>
>>> Signed-off-by: Ben Gardon <bgardon@google.com>
>>> ---
>>
>> The internal memslots are much higher than this (0xfffbc000 and
>> 0xfee00000). I'm changing the patch to block 0xfe0000000 and above,
>> otherwise it breaks vmx_dirty_log_test.
>
> Perhaps we're working in different units, but I believe paddrs
> 0xfffbc000 and 0xfee00000 are between 3GiB and 4GiB.
> "Proof by Python":

I invoke the "not a native speaker" card. Rephrasing: there is a large
part at the beginning of the area between 3GiB and 4GiB that isn't used
by the internal memslots (but is used by vmx_dirty_log_test).

Though I have no excuse for the extra zero, the range to block is
0xfe000000 to 0x100000000.

Paolo

>>>> B=1
>>>> KB=1024*B
>>>> MB=1024*KB
>>>> GB=1024*MB
>>>> hex(3*GB)
> '0xc0000000'
>>>> hex(4*GB)
> '0x100000000'
>>>> 3*GB == 3<<30
> True
>>>> 0xfffbc000 > 3*GB
> True
>>>> 0xfffbc000 < 4*GB
> True
>>>> 0xfee00000 > 3*GB
> True
>>>> 0xfee00000 < 4*GB
> True
>
> Am I missing something?
>
> I don't think blocking 0xfe0000000 and above is useful, as there's
> nothing mapped in that region and AFAIK it's perfectly valid to create
> memslots there.
>
>
>>
>> Paolo
>>
>
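In code form, Paolo's correction amounts to narrowing the two constants
in the patch below, which as posted block all of 3GiB-4GiB. A minimal
sketch, reusing the patch's macro names and assuming the TEST_ASSERT
itself is left unchanged:

/*
 * Narrowed prohibited range per the discussion above: block only
 * [0xfe000000, 0x100000000), not all of [3GiB, 4GiB).
 */
#define KVM_INTERNAL_MEMSLOTS_START_PADDR	0xfe000000UL
#define KVM_INTERNAL_MEMSLOTS_END_PADDR		0x100000000UL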
On Sat, Jan 25, 2020 at 1:37 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 24/01/20 19:41, Ben Gardon wrote:
> > On Fri, Jan 24, 2020 at 12:58 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
> >>
> >> On 23/01/20 19:04, Ben Gardon wrote:
> >>> KVM creates internal memslots covering the region between 3G and 4G in
> >>> the guest physical address space, when the first vCPU is created.
> >>> Mapping this region before creation of the first vCPU causes vCPU
> >>> creation to fail. Prohibit tests from creating such a memslot and fail
> >>> with a helpful warning when they try to.
> >>>
> >>> Signed-off-by: Ben Gardon <bgardon@google.com>
> >>> ---
> >>
> >> The internal memslots are much higher than this (0xfffbc000 and
> >> 0xfee00000). I'm changing the patch to block 0xfe0000000 and above,
> >> otherwise it breaks vmx_dirty_log_test.
> >
> > Perhaps we're working in different units, but I believe paddrs
> > 0xfffbc000 and 0xfee00000 are between 3GiB and 4GiB.
> > "Proof by Python":
>
> I invoke the "not a native speaker" card. Rephrasing: there is a large
> part at the beginning of the area between 3GiB and 4GiB that isn't used
> by the internal memslots (but is used by vmx_dirty_log_test).

Ah, that makes perfect sense, thank you for clarifying. I think the
3G-4G in my head may have come from the x86 PCI hole or similar. In any
case, reducing the prohibited range to just the range covered by
internal memslots feels like a good change.

> Though I have no excuse for the extra zero, the range to block is
> 0xfe000000 to 0x100000000.
>
> Paolo
>
> >>>> B=1
> >>>> KB=1024*B
> >>>> MB=1024*KB
> >>>> GB=1024*MB
> >>>> hex(3*GB)
> > '0xc0000000'
> >>>> hex(4*GB)
> > '0x100000000'
> >>>> 3*GB == 3<<30
> > True
> >>>> 0xfffbc000 > 3*GB
> > True
> >>>> 0xfffbc000 < 4*GB
> > True
> >>>> 0xfee00000 > 3*GB
> > True
> >>>> 0xfee00000 < 4*GB
> > True
> >
> > Am I missing something?
> >
> > I don't think blocking 0xfe0000000 and above is useful, as there's
> > nothing mapped in that region and AFAIK it's perfectly valid to create
> > memslots there.
> >
> >
> >>
> >> Paolo
> >>
> >
>
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 41cf45416060f..5b971c04f1643 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -113,6 +113,8 @@ const char * const vm_guest_mode_string[] = {
 _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
 	       "Missing new mode strings?");
 
+#define KVM_INTERNAL_MEMSLOTS_START_PADDR (3UL << 30)
+#define KVM_INTERNAL_MEMSLOTS_END_PADDR (4UL << 30)
 /*
  * VM Create
  *
@@ -593,6 +595,20 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 		"  vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		guest_paddr, npages, vm->max_gfn, vm->page_size);
 
+	/*
+	 * Check that this region does not overlap with KVM internal memslots,
+	 * which are created when the first vCPU is created.
+	 */
+	TEST_ASSERT(guest_paddr >= KVM_INTERNAL_MEMSLOTS_END_PADDR ||
+		    guest_paddr + npages * vm->page_size <= KVM_INTERNAL_MEMSLOTS_START_PADDR,
+		    "Memslot overlaps with region mapped by internal KVM\n"
+		    "memslots:\n"
+		    "  Requested paddr range: [0x%lx, 0x%lx)\n"
+		    "  KVM internal memslot range: [0x%lx, 0x%lx)\n",
+		    guest_paddr, guest_paddr + npages * vm->page_size,
+		    KVM_INTERNAL_MEMSLOTS_START_PADDR,
+		    KVM_INTERNAL_MEMSLOTS_END_PADDR);
+
 	/*
 	 * Confirm a mem region with an overlapping address doesn't
 	 * already exist.
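As a usage illustration: with this patch applied, a test that places a
memslot inside the prohibited window fails the new TEST_ASSERT up front
instead of failing vCPU creation later. A hypothetical snippet follows;
the slot number and size are arbitrary, and the vm_create() arguments
follow my recollection of the selftests library around this series, so
treat it as a sketch rather than a tested program:

#include <fcntl.h>	/* O_RDWR */

#include "kvm_util.h"	/* selftests library: vm_create() and friends */

int main(void)
{
	struct kvm_vm *vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);

	/*
	 * Expected to trip the new TEST_ASSERT: 0xfe000000 lies inside
	 * the prohibited window under both the posted patch (3GiB-4GiB)
	 * and Paolo's narrowed range of [0xfe000000, 0x100000000).
	 */
	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
				    0xfe000000UL,	/* guest_paddr */
				    1,			/* slot */
				    16,			/* npages */
				    0);			/* flags */

	return 0;
}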
KVM creates internal memslots covering the region between 3G and 4G in
the guest physical address space, when the first vCPU is created.
Mapping this region before creation of the first vCPU causes vCPU
creation to fail. Prohibit tests from creating such a memslot and fail
with a helpful warning when they try to.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)