Message ID: 20210130025852.12430-1-andrew.cooper3@citrix.com
Series:     acquire_resource size and external IPT monitoring
On 30.01.21 04:58, Andrew Cooper wrote:

Hi Andrew

> Combined series (as they are dependent). First, the resource size fixes, and
> then the external IPT monitoring built on top.
>
> Posting in full for reference, but several patches are ready to go in. Those
> in need of review are patches 6, 8 and 12.
>
> See individual patches for changes. The major work was rebasing over the
> ARM/IOREQ series, which moved a load of code that this series was bugfixing.

Looks like some of these patches have already been merged, so I did
preliminary testing of the current staging
(9dc687f155a57216b83b17f9cde55dd43e06b0cd "x86/debug: fix page-overflow
bug in dbg_rw_guest_mem") on Arm *with* IOREQ enabled.

I didn't notice any regressions with IOREQ on Arm))

>
> Andrew Cooper (7):
>   xen/memory: Reject out-of-range resource 'frame' values
>   xen/gnttab: Rework resource acquisition
>   xen/memory: Fix acquire_resource size semantics
>   xen/memory: Improve compat XENMEM_acquire_resource handling
>   xen/memory: Indent part of acquire_resource()
>   xen/memory: Fix mapping grant tables with XENMEM_acquire_resource
>   xen+tools: Introduce XEN_SYSCTL_PHYSCAP_vmtrace
>
> Michał Leszczyński (7):
>   xen/domain: Add vmtrace_size domain creation parameter
>   tools/[lib]xl: Add vmtrace_buf_size parameter
>   xen/memory: Add a vmtrace_buf resource type
>   x86/vmx: Add Intel Processor Trace support
>   xen/domctl: Add XEN_DOMCTL_vmtrace_op
>   tools/libxc: Add xc_vmtrace_* functions
>   tools/misc: Add xen-vmtrace tool
>
> Tamas K Lengyel (2):
>   xen/vmtrace: support for VM forks
>   x86/vm_event: Carry the vmtrace buffer position in vm_event
>
>  docs/man/xl.cfg.5.pod.in                    |   9 +
>  tools/golang/xenlight/helpers.gen.go        |   4 +
>  tools/golang/xenlight/types.gen.go          |   2 +
>  tools/include/libxl.h                       |  14 ++
>  tools/include/xenctrl.h                     |  73 ++++++++
>  tools/libs/ctrl/Makefile                    |   1 +
>  tools/libs/ctrl/xc_vmtrace.c                | 128 +++++++++++++
>  tools/libs/light/libxl.c                    |   2 +
>  tools/libs/light/libxl_cpuid.c              |   1 +
>  tools/libs/light/libxl_create.c             |   1 +
>  tools/libs/light/libxl_types.idl            |   5 +
>  tools/misc/.gitignore                       |   1 +
>  tools/misc/Makefile                         |   7 +
>  tools/misc/xen-cpuid.c                      |   2 +-
>  tools/misc/xen-vmtrace.c                    | 154 ++++++++++++++++
>  tools/ocaml/libs/xc/xenctrl.ml              |   1 +
>  tools/ocaml/libs/xc/xenctrl.mli             |   1 +
>  tools/xl/xl_info.c                          |   5 +-
>  tools/xl/xl_parse.c                         |   4 +
>  xen/arch/x86/domain.c                       |  23 +++
>  xen/arch/x86/domctl.c                       |  55 ++++++
>  xen/arch/x86/hvm/vmx/vmcs.c                 |  19 +-
>  xen/arch/x86/hvm/vmx/vmx.c                  | 200 +++++++++++++++++++-
>  xen/arch/x86/mm/mem_sharing.c               |   3 +
>  xen/arch/x86/vm_event.c                     |   3 +
>  xen/common/compat/memory.c                  | 147 +++++++++++----
>  xen/common/domain.c                         |  81 ++++++++
>  xen/common/grant_table.c                    | 112 ++++++++----
>  xen/common/ioreq.c                          |   2 +-
>  xen/common/memory.c                         | 274 +++++++++++++++++++---------
>  xen/common/sysctl.c                         |   2 +
>  xen/include/asm-x86/cpufeature.h            |   1 +
>  xen/include/asm-x86/hvm/hvm.h               |  72 ++++++++
>  xen/include/asm-x86/hvm/vmx/vmcs.h          |   4 +
>  xen/include/asm-x86/msr.h                   |  32 ++++
>  xen/include/public/arch-x86/cpufeatureset.h |   1 +
>  xen/include/public/domctl.h                 |  38 ++++
>  xen/include/public/memory.h                 |  18 +-
>  xen/include/public/sysctl.h                 |   3 +-
>  xen/include/public/vm_event.h               |   7 +
>  xen/include/xen/domain.h                    |   2 +
>  xen/include/xen/grant_table.h               |  21 ++-
>  xen/include/xen/ioreq.h                     |   2 +-
>  xen/include/xen/sched.h                     |   6 +
>  xen/xsm/flask/hooks.c                       |   1 +
>  45 files changed, 1366 insertions(+), 178 deletions(-)
>  create mode 100644 tools/libs/ctrl/xc_vmtrace.c
>  create mode 100644 tools/misc/xen-vmtrace.c
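[Editor's note: the tools-side patches above add an xl guest-config knob
and a standalone tracing tool. A hypothetical usage sketch follows; the
option and tool names come straight from the patch titles
("vmtrace_buf_size", "xen-vmtrace"), but the exact syntax, units and
arguments are assumptions, so defer to the merged documentation.]

    # xl.cfg: reserve a per-vcpu processor-trace buffer for the guest
    # (value and units are illustrative assumptions)
    vmtrace_buf_size = 65536

    # Stream the IPT buffer of vcpu 0 in domain 1 to a file
    # (argument order is an assumption)
    xen-vmtrace 1 0 > vcpu0.ipt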
On 01/02/2021 12:34, Oleksandr wrote:
>
> On 30.01.21 04:58, Andrew Cooper wrote:
>
> Hi Andrew
>
>> Combined series (as they are dependent). First, the resource size
>> fixes, and
>> then the external IPT monitoring built on top.
>>
>> Posting in full for reference, but several patches are ready to go
>> in. Those
>> in need of review are patches 6, 8 and 12.
>>
>> See individual patches for changes. The major work was rebasing over
>> the
>> ARM/IOREQ series, which moved a load of code that this series was
>> bugfixing.
>
> Looks like some of these patches have already been merged, so I did
> preliminary testing of the current staging
> (9dc687f155a57216b83b17f9cde55dd43e06b0cd "x86/debug: fix page-overflow
> bug in dbg_rw_guest_mem") on Arm *with* IOREQ enabled.
>
> I didn't notice any regressions with IOREQ on Arm))

Fantastic!

Tamas and I did extended testing on the subset which got committed,
before it went in, and it is all fixing of corner cases, rather than
fundamentally changing how things worked.

One query I did leave on IRC, which hasn't had an answer:

What is the maximum number of vcpus in an ARM guest? You moved an
x86-ism ("max 128 vcpus") into common code.

~Andrew
On 01.02.21 15:07, Andrew Cooper wrote:

Hi Andrew

> On 01/02/2021 12:34, Oleksandr wrote:
>> On 30.01.21 04:58, Andrew Cooper wrote:
>>
>> Hi Andrew
>>
>>> Combined series (as they are dependent). First, the resource size
>>> fixes, and
>>> then the external IPT monitoring built on top.
>>>
>>> Posting in full for reference, but several patches are ready to go
>>> in. Those
>>> in need of review are patches 6, 8 and 12.
>>>
>>> See individual patches for changes. The major work was rebasing over
>>> the
>>> ARM/IOREQ series, which moved a load of code that this series was
>>> bugfixing.
>> Looks like some of these patches have already been merged, so I did
>> preliminary testing of the current staging
>> (9dc687f155a57216b83b17f9cde55dd43e06b0cd "x86/debug: fix page-overflow
>> bug in dbg_rw_guest_mem") on Arm *with* IOREQ enabled.
>>
>> I didn't notice any regressions with IOREQ on Arm))
> Fantastic!
>
> Tamas and I did extended testing on the subset which got committed,
> before it went in, and it is all fixing of corner cases, rather than
> fundamentally changing how things worked.
>
>
> One query I did leave on IRC, which hasn't had an answer:
>
> What is the maximum number of vcpus in an ARM guest?

public/arch-arm.h says the current supported maximum number of guest
VCPUs is 128.

> You moved an
> x86-ism ("max 128 vcpus") into common code.

Ooh, I am not sure I understand where exactly. Could you please
clarify in which patch?

>
> ~Andrew
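[Editor's note: the ARM limit being cited lives in
xen/include/public/arch-arm.h. From memory of the tree around this time
the definition reads roughly as follows; the exact macro name and
comment are assumptions worth double-checking against the header.]

    /* Current supported guest VCPUs */
    #define GUEST_MAX_VCPUS 128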
On 01/02/2021 13:47, Oleksandr wrote:
>
> On 01.02.21 15:07, Andrew Cooper wrote:
>
> Hi Andrew
>
>> On 01/02/2021 12:34, Oleksandr wrote:
>>> On 30.01.21 04:58, Andrew Cooper wrote:
>>
>> One query I did leave on IRC, which hasn't had an answer:
>>
>> What is the maximum number of vcpus in an ARM guest?
>
> public/arch-arm.h says the current supported maximum number of guest
> VCPUs is 128.
>
>> You moved an
>> x86-ism ("max 128 vcpus") into common code.
>
> Ooh, I am not sure I understand where exactly. Could you please
> clarify in which patch?

ioreq_server_get_frame() hardcodes "there is exactly one non-bufioreq
frame", which in practice means there are 128 vcpus' worth of struct
ioreqs contained within the mapping.

I've coded ioreq_server_max_frames() to perform the calculation
correctly, but ioreq_server_get_frame() will need fixing by whomever
supports more than 128 vcpus with ioreq servers first.

~Andrew
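[Editor's note: the "128 vcpus per frame" figure falls straight out of
the page-size arithmetic. Below is a minimal standalone sketch of that
calculation; the helper name echoes ioreq_server_max_frames() from the
mail, the 32-byte struct ioreq size is an assumption about the ABI
layout, and the real in-tree code differs.]

    #include <stdio.h>

    #define PAGE_SIZE  4096
    #define IOREQ_SIZE 32   /* assumed sizeof(struct ioreq) in the ABI */

    /* One bufioreq frame, plus however many pages the per-vcpu
     * synchronous struct ioreqs need.  With 32-byte ioreqs, a single
     * page holds 4096 / 32 = 128 slots, hence the 128-vcpu cap when
     * exactly one non-bufioreq frame is hardcoded. */
    static unsigned int max_frames(unsigned int max_vcpus)
    {
        unsigned int slots_per_frame = PAGE_SIZE / IOREQ_SIZE;

        return 1 + (max_vcpus + slots_per_frame - 1) / slots_per_frame;
    }

    int main(void)
    {
        printf("128 vcpus -> %u frames\n", max_frames(128));  /* 2 */
        printf("129 vcpus -> %u frames\n", max_frames(129));  /* 3 */
        return 0;
    }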
On 01.02.21 16:00, Andrew Cooper wrote:

Hi Andrew

> On 01/02/2021 13:47, Oleksandr wrote:
>> On 01.02.21 15:07, Andrew Cooper wrote:
>>
>> Hi Andrew
>>
>>> On 01/02/2021 12:34, Oleksandr wrote:
>>>> On 30.01.21 04:58, Andrew Cooper wrote:
>>> One query I did leave on IRC, which hasn't had an answer:
>>>
>>> What is the maximum number of vcpus in an ARM guest?
>> public/arch-arm.h says the current supported maximum number of guest
>> VCPUs is 128.
>>
>>> You moved an
>>> x86-ism ("max 128 vcpus") into common code.
>> Ooh, I am not sure I understand where exactly. Could you please
>> clarify in which patch?
> ioreq_server_get_frame() hardcodes "there is exactly one non-bufioreq
> frame", which in practice means there are 128 vcpus' worth of struct
> ioreqs contained within the mapping.
>
> I've coded ioreq_server_max_frames() to perform the calculation
> correctly, but ioreq_server_get_frame() will need fixing by whomever
> supports more than 128 vcpus with ioreq servers first.

Thank you for the explanation. Now it is clear what you meant.