Message ID | 1453978470-222624-4-git-send-email-imammedo@redhat.com (mailing list archive) |
---|---|
State | New, archived |
On Thu, Jan 28, 2016 at 11:54:25AM +0100, Igor Mammedov wrote:
> Based on Microsoft's specifications (paper can be
> downloaded from http://go.microsoft.com/fwlink/?LinkId=260709,
> easily found by "Virtual Machine Generation ID" keywords),
> add a PCI device with corresponding description in
> SSDT ACPI table.
>
> The GUID is set using "vmgenid.guid" property or
> a corresponding HMP/QMP command.
>
> Example of using vmgenid device:
>  -device vmgenid,id=FOO,guid="324e6eaf-d1d1-4bf6-bf41-b9bb6c91fb87"
>
> 'vmgenid' device initialization flow is as follows:
>  1. vmgenid has a RAM BAR registered with the size of the GUID buffer
>  2. BIOS initializes PCI devices and maps the BAR in the PCI hole
>  3. BIOS reads ACPI tables from QEMU; at that moment the tables
>     are generated with the \_SB.VMGI.ADDR constant pointing to the
>     GPA where the BIOS mapped vmgenid's BAR earlier
>
> Note:
> This implementation uses PCI class code 0x0500 for the vmgenid device,
> which is marked as NO_DRV in Windows's machine.inf.
> Testing various Windows versions showed that the OS
> neither touches nor checks for resource conflicts
> for such PCI devices.
> There was concern that during PCI rebalancing the OS
> could reprogram the BAR to another place, which would
> leave VGEN.ADDR pointing to the old (no longer valid)
> address.
> However, testing showed that Windows does rebalancing
> only for PCI devices that have a driver attached
> and completely ignores the NO_DRV class of devices.
> This in turn creates a problem where the OS could remap
> one of the PCI devices (with a driver) over a BAR used by
> a driver-less PCI device.
> Statically declaring the used memory range in VGEN._CRS
> makes the OS honor the resource reservation, and the ignored
> BAR range is no longer touched during PCI rebalancing.
>
> Signed-off-by: Gal Hammer <ghammer@redhat.com>
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>

It's an interesting hack, but this needs some thought. The BIOS has no idea
this BAR is special and cannot be rebalanced, so it might put the BAR in
the middle of the range, in effect fragmenting it.

Really I think something like v12, just rewritten using the new APIs
(probably with something like the build_append_named_dword that I
suggested), would be a much simpler way to implement this device, given
the weird API limitations.

And hey, if you want to use a PCI device to pass the physical address from
guest to host, then instead of reserving a couple of IO addresses, sure,
stick it in PCI config space in a vendor-specific capability; this way
it'll get migrated automatically.
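For concreteness, a minimal sketch of what such a vendor-specific capability could look like on top of this device. The capability layout and the helper name are made up for illustration; pci_add_capability() and pci_set_long() are existing QEMU helpers, though their exact signatures have varied across versions:

```c
#include "hw/pci/pci.h"

#define VGID_ADDR_CAP_SIZE 12 /* id/next/len header + pad + 64-bit GPA (assumed layout) */

/* Hypothetical helper: stash the GUID buffer's guest-physical address in a
 * vendor-specific capability, so it travels with config space on migration
 * instead of needing extra vmstate or a reserved IO range. */
static int vmgenid_add_addr_cap(PCIDevice *pdev, uint64_t buf_paddr)
{
    int offset;

    /* passing offset 0 lets QEMU pick a free spot in config space */
    offset = pci_add_capability(pdev, PCI_CAP_ID_VNDR, 0, VGID_ADDR_CAP_SIZE);
    if (offset < 0) {
        return offset;
    }
    pdev->config[offset + 2] = VGID_ADDR_CAP_SIZE;      /* vendor cap length byte */
    pci_set_long(pdev->config + offset + 4, buf_paddr & 0xffffffff);
    pci_set_long(pdev->config + offset + 8, buf_paddr >> 32);
    return 0;
}
```

The attraction over an IO-port handshake is that config space is already part of the device's migrated state, so no custom migration code is needed.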
Tsirkin" <mst@redhat.com> > - permit only one vmgenid to be created > - enable BAR be mapped above 4Gb if it can't be mapped at low mem > --- > default-configs/i386-softmmu.mak | 1 + > default-configs/x86_64-softmmu.mak | 1 + > docs/specs/pci-ids.txt | 1 + > hw/i386/acpi-build.c | 56 +++++++++++++- > hw/misc/Makefile.objs | 1 + > hw/misc/vmgenid.c | 154 +++++++++++++++++++++++++++++++++++++ > include/hw/misc/vmgenid.h | 27 +++++++ > include/hw/pci/pci.h | 1 + > 8 files changed, 240 insertions(+), 2 deletions(-) > create mode 100644 hw/misc/vmgenid.c > create mode 100644 include/hw/misc/vmgenid.h > > diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak > index b177e52..6402439 100644 > --- a/default-configs/i386-softmmu.mak > +++ b/default-configs/i386-softmmu.mak > @@ -51,6 +51,7 @@ CONFIG_APIC=y > CONFIG_IOAPIC=y > CONFIG_PVPANIC=y > CONFIG_MEM_HOTPLUG=y > +CONFIG_VMGENID=y > CONFIG_NVDIMM=y > CONFIG_ACPI_NVDIMM=y > CONFIG_XIO3130=y > diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak > index 6e3b312..fdac18f 100644 > --- a/default-configs/x86_64-softmmu.mak > +++ b/default-configs/x86_64-softmmu.mak > @@ -51,6 +51,7 @@ CONFIG_APIC=y > CONFIG_IOAPIC=y > CONFIG_PVPANIC=y > CONFIG_MEM_HOTPLUG=y > +CONFIG_VMGENID=y > CONFIG_NVDIMM=y > CONFIG_ACPI_NVDIMM=y > CONFIG_XIO3130=y > diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt > index 0adcb89..e65ecf9 100644 > --- a/docs/specs/pci-ids.txt > +++ b/docs/specs/pci-ids.txt > @@ -47,6 +47,7 @@ PCI devices (other than virtio): > 1b36:0005 PCI test device (docs/specs/pci-testdev.txt) > 1b36:0006 PCI Rocker Ethernet switch device > 1b36:0007 PCI SD Card Host Controller Interface (SDHCI) > +1b36:0009 PCI VM-Generation device > 1b36:000a PCI-PCI bridge (multiseat) > > All these devices are documented in docs/specs. > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c > index 78758e2..0187262 100644 > --- a/hw/i386/acpi-build.c > +++ b/hw/i386/acpi-build.c > @@ -44,6 +44,7 @@ > #include "hw/acpi/tpm.h" > #include "sysemu/tpm_backend.h" > #include "hw/timer/mc146818rtc_regs.h" > +#include "hw/misc/vmgenid.h" > > /* Supported chipsets: */ > #include "hw/acpi/piix4.h" > @@ -237,6 +238,40 @@ static void acpi_get_misc_info(AcpiMiscInfo *info) > info->applesmc_io_base = applesmc_port(); > } > > +static Aml *build_vmgenid_device(uint64_t buf_paddr) > +{ > + Aml *dev, *pkg, *crs; > + > + dev = aml_device("VGEN"); > + aml_append(dev, aml_name_decl("_HID", aml_string("QEMU0003"))); > + aml_append(dev, aml_name_decl("_CID", aml_string("VM_Gen_Counter"))); > + aml_append(dev, aml_name_decl("_DDN", aml_string("VM_Gen_Counter"))); > + > + pkg = aml_package(2); > + /* low 32 bits of UUID buffer addr */ > + aml_append(pkg, aml_int(buf_paddr & 0xFFFFFFFFUL)); > + /* high 32 bits of UUID buffer addr */ > + aml_append(pkg, aml_int(buf_paddr >> 32)); > + aml_append(dev, aml_name_decl("ADDR", pkg)); > + > + /* > + * VMGEN device has class_id PCI_CLASS_MEMORY_RAM and Windows > + * displays it as "PCI RAM controller" which is marked as NO_DRV > + * so Windows ignores VMGEN device completely and doesn't check > + * for resource conflicts which during PCI rebalancing can lead > + * to another PCI device claiming ignored BARs. To prevent this > + * statically reserve resources used by VM_Gen_Counter. > + * For more verbose comment see this commit message. What does "this commit message" mean? 
> +
>  /*
>   * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
>   * On i386 arch we only have two pci hosts, so we can look only for them.
> @@ -2171,6 +2206,7 @@ build_ssdt(GArray *table_data, GArray *linker,
>      }
>
>      if (bus) {
> +        Object *vmgen;
>          Aml *scope = aml_scope("PCI0");
>          /* Scan all PCI buses. Generate tables to support hotplug. */
>          build_append_pci_bus_devices(scope, bus, pm->pcihp_bridge_en);
> @@ -2187,6 +2223,24 @@ build_ssdt(GArray *table_data, GArray *linker,
>              aml_append(scope, dev);
>          }
>
> +        vmgen = find_vmgneid_dev(NULL);
> +        if (vmgen) {
> +            PCIDevice *pdev = PCI_DEVICE(vmgen);
> +            uint64_t buf_paddr =
> +                pci_get_bar_addr(pdev, VMGENID_VMGID_BUF_BAR);
> +
> +            if (buf_paddr != PCI_BAR_UNMAPPED) {
> +                aml_append(scope, build_vmgenid_device(buf_paddr));
> +
> +                method = aml_method("\\_GPE._E00", 0,
> +                                    AML_NOTSERIALIZED);
> +                aml_append(method,
> +                           aml_notify(aml_name("\\_SB.PCI0.VGEN"),
> +                                      aml_int(0x80)));
> +                aml_append(ssdt, method);
> +            }
> +        }
> +
>          aml_append(sb_scope, scope);
>      }
>  }
> @@ -2489,8 +2543,6 @@ build_dsdt(GArray *table_data, GArray *linker,
>      {
>          aml_append(scope, aml_name_decl("_HID", aml_string("ACPI0006")));
>
> -        aml_append(scope, aml_method("_L00", 0, AML_NOTSERIALIZED));
> -
>          if (misc->is_piix4) {
>              method = aml_method("_E01", 0, AML_NOTSERIALIZED);
>              aml_append(method,
> diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
> index d4765c2..1f05edd 100644
> --- a/hw/misc/Makefile.objs
> +++ b/hw/misc/Makefile.objs
> @@ -43,4 +43,5 @@ obj-$(CONFIG_STM32F2XX_SYSCFG) += stm32f2xx_syscfg.o
>
>  obj-$(CONFIG_PVPANIC) += pvpanic.o
>  obj-$(CONFIG_EDU) += edu.o
> +obj-$(CONFIG_VMGENID) += vmgenid.o
>  obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o
> diff --git a/hw/misc/vmgenid.c b/hw/misc/vmgenid.c
> new file mode 100644
> index 0000000..a2fbdfc
> --- /dev/null
> +++ b/hw/misc/vmgenid.c
> @@ -0,0 +1,154 @@
> +/*
> + * Virtual Machine Generation ID Device
> + *
> + * Copyright (C) 2016 Red Hat Inc.
> + *
> + * Authors: Gal Hammer <ghammer@redhat.com>
> + *          Igor Mammedov <imammedo@redhat.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + *
> + */
> +
> +#include "hw/i386/pc.h"
> +#include "hw/pci/pci.h"
> +#include "hw/misc/vmgenid.h"
> +#include "hw/acpi/acpi.h"
> +#include "qapi/visitor.h"
> +
> +#define VMGENID(obj) OBJECT_CHECK(VmGenIdState, (obj), VMGENID_DEVICE)
> +
> +typedef struct VmGenIdState {
> +    PCIDevice parent_obj;
> +    MemoryRegion iomem;
> +    union {
> +        uint8_t guid[16];
> +        uint8_t guid_page[VMGENID_VMGID_BUF_SIZE];
> +    };
> +    bool guid_set;
> +} VmGenIdState;
> +
> +Object *find_vmgneid_dev(Error **errp)
> +{
> +    Object *obj = object_resolve_path_type("", VMGENID_DEVICE, NULL);
> +    if (!obj) {
> +        error_setg(errp, VMGENID_DEVICE " is not found");
> +    }
> +    return obj;
> +}
> +
> +static void vmgenid_update_guest(VmGenIdState *s)
> +{
> +    Object *acpi_obj;
> +    void *ptr = memory_region_get_ram_ptr(&s->iomem);
> +
> +    memcpy(ptr, &s->guid, sizeof(s->guid));
> +    memory_region_set_dirty(&s->iomem, 0, sizeof(s->guid));
> +
> +    acpi_obj = object_resolve_path_type("", TYPE_ACPI_DEVICE_IF, NULL);
> +    if (acpi_obj) {
> +        AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_obj);
> +        AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_obj);
> +        ACPIREGS *acpi_regs = adevc->regs(adev);
> +
> +        acpi_regs->gpe.sts[0] |= 1; /* _GPE.E00 handler */
> +        acpi_update_sci(acpi_regs, adevc->sci(adev));
> +    }
> +}
> +
> +static void vmgenid_set_guid(Object *obj, const char *value, Error **errp)
> +{
> +    VmGenIdState *s = VMGENID(obj);
> +
> +    if (qemu_uuid_parse(value, s->guid) < 0) {
> +        error_setg(errp, "'%s." VMGENID_GUID
> +                   "': Failed to parse GUID string: %s",
> +                   object_get_typename(OBJECT(s)),
> +                   value);
> +        return;
> +    }
> +
> +    s->guid_set = true;
> +    vmgenid_update_guest(s);
> +}
> +
> +static void vmgenid_get_vmgid_addr(Object *obj, Visitor *v, void *opaque,
> +                                   const char *name, Error **errp)
> +{
> +    int64_t value = pci_get_bar_addr(PCI_DEVICE(obj), 0);
> +
> +    if (value == PCI_BAR_UNMAPPED) {
> +        error_setg(errp, "'%s." VMGENID_VMGID_BUF_ADDR "': not initialized",
> +                   object_get_typename(OBJECT(obj)));
> +        return;
> +    }
> +    visit_type_int(v, &value, name, errp);
> +}
> +
> +static void vmgenid_initfn(Object *obj)
> +{
> +    VmGenIdState *s = VMGENID(obj);
> +
> +    memory_region_init_ram(&s->iomem, obj, "vgid.bar", sizeof(s->guid_page),
> +                           &error_abort);
> +
> +    object_property_add_str(obj, VMGENID_GUID, NULL, vmgenid_set_guid, NULL);
> +    object_property_add(obj, VMGENID_VMGID_BUF_ADDR, "int",
> +                        vmgenid_get_vmgid_addr, NULL, NULL, NULL, NULL);
> +}
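For reference, changing the GUID at runtime through the property registered here would look roughly like this over QMP. This is a sketch, not part of the patch: it assumes the device was created with id=FOO as in the commit message, so it shows up at /machine/peripheral/FOO, and uses the generic qom-set/qom-get commands:

```
{ "execute": "qom-set",
  "arguments": { "path": "/machine/peripheral/FOO",
                 "property": "guid",
                 "value": "324e6eaf-d1d1-4bf6-bf41-b9bb6c91fb87" } }

{ "execute": "qom-get",
  "arguments": { "path": "/machine/peripheral/FOO",
                 "property": "vmgid-addr" } }
```

Setting "guid" would go through vmgenid_set_guid() above and so also trigger vmgenid_update_guest(); reading "vmgid-addr" reports where firmware mapped the BAR, or an error before it is mapped.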
VMGENID_GUID "' property is not set", > + object_get_typename(OBJECT(s))); > + return; > + } > + > + vmstate_register_ram(&s->iomem, DEVICE(s)); > + pci_register_bar(PCI_DEVICE(s), VMGENID_VMGID_BUF_BAR, > + PCI_BASE_ADDRESS_MEM_PREFETCH | > + PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, > + &s->iomem); > + return; > +} > + > +static void vmgenid_class_init(ObjectClass *klass, void *data) > +{ > + DeviceClass *dc = DEVICE_CLASS(klass); > + PCIDeviceClass *k = PCI_DEVICE_CLASS(klass); > + > + set_bit(DEVICE_CATEGORY_MISC, dc->categories); > + dc->hotpluggable = false; > + k->realize = vmgenid_realize; > + k->vendor_id = PCI_VENDOR_ID_REDHAT; > + k->device_id = PCI_DEVICE_ID_REDHAT_VMGENID; > + k->class_id = PCI_CLASS_MEMORY_RAM; > +} > + > +static const TypeInfo vmgenid_device_info = { > + .name = VMGENID_DEVICE, > + .parent = TYPE_PCI_DEVICE, > + .instance_size = sizeof(VmGenIdState), > + .instance_init = vmgenid_initfn, > + .class_init = vmgenid_class_init, > +}; > + > +static void vmgenid_register_types(void) > +{ > + type_register_static(&vmgenid_device_info); > +} > + > +type_init(vmgenid_register_types) > diff --git a/include/hw/misc/vmgenid.h b/include/hw/misc/vmgenid.h > new file mode 100644 > index 0000000..b90882c > --- /dev/null > +++ b/include/hw/misc/vmgenid.h > @@ -0,0 +1,27 @@ > +/* > + * Virtual Machine Generation ID Device > + * > + * Copyright (C) 2016 Red Hat Inc. > + * > + * Authors: Gal Hammer <ghammer@redhat.com> > + * Igor Mammedov <imammedo@redhat.com> > + * > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > + * See the COPYING file in the top-level directory. > + * > + */ > + > +#ifndef HW_MISC_VMGENID_H > +#define HW_MISC_VMGENID_H > + > +#include "qom/object.h" > + > +#define VMGENID_DEVICE "vmgenid" > +#define VMGENID_GUID "guid" > +#define VMGENID_VMGID_BUF_ADDR "vmgid-addr" > +#define VMGENID_VMGID_BUF_SIZE 0x1000 > +#define VMGENID_VMGID_BUF_BAR 0 > + > +Object *find_vmgneid_dev(Error **errp); > + > +#endif > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h > index dedf277..f4c9d48 100644 > --- a/include/hw/pci/pci.h > +++ b/include/hw/pci/pci.h > @@ -94,6 +94,7 @@ > #define PCI_DEVICE_ID_REDHAT_PXB 0x0009 > #define PCI_DEVICE_ID_REDHAT_BRIDGE_SEAT 0x000a > #define PCI_DEVICE_ID_REDHAT_PXB_PCIE 0x000b > +#define PCI_DEVICE_ID_REDHAT_VMGENID 0x000c > #define PCI_DEVICE_ID_REDHAT_QXL 0x0100 > > #define FMT_PCIBUS PRIx64 > -- > 1.8.3.1
On Thu, 28 Jan 2016 13:13:04 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Thu, Jan 28, 2016 at 11:54:25AM +0100, Igor Mammedov wrote:
> > Based on Microsoft's specifications (paper can be
> > downloaded from http://go.microsoft.com/fwlink/?LinkId=260709,
> > easily found by "Virtual Machine Generation ID" keywords),
> > add a PCI device with corresponding description in
> > SSDT ACPI table.
[...]
> It's an interesting hack, but this needs some thought. The BIOS has no idea
> this BAR is special and cannot be rebalanced, so it might put the BAR in
> the middle of the range, in effect fragmenting it.
Yep, that's the only drawback of the PCI approach.

> Really I think something like v12, just rewritten using the new APIs
> (probably with something like the build_append_named_dword that I
> suggested), would be a much simpler way to implement this device, given
> the weird API limitations.
We went over the drawbacks of both approaches several times, and that's
where I strongly disagree with using the v12 AML patching approach, for
the reasons stated in those discussions.

> And hey, if you want to use a PCI device to pass the physical address from
> guest to host, then instead of reserving a couple of IO addresses, sure,
> stick it in PCI config space in a vendor-specific capability; this way
> it'll get migrated automatically.
Could you elaborate more on this suggestion?

> > + * For more verbose comment see this commit message.
>
> What does "this commit message" mean?
The commit message above. Should I reword it to just 'see commit message'?
[...]
On Thu, Jan 28, 2016 at 01:03:16PM +0100, Igor Mammedov wrote:
> On Thu, 28 Jan 2016 13:13:04 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > It's an interesting hack, but this needs some thought. The BIOS has no
> > idea this BAR is special and cannot be rebalanced, so it might put the
> > BAR in the middle of the range, in effect fragmenting it.
> Yep, that's the only drawback of the PCI approach.
>
> > Really I think something like v12, just rewritten using the new APIs
> > (probably with something like the build_append_named_dword that I
> > suggested), would be a much simpler way to implement this device, given
> > the weird API limitations.
> We went over the drawbacks of both approaches several times, and that's
> where I strongly disagree with using the v12 AML patching approach, for
> the reasons stated in those discussions.

Yes, IIRC you dislike the need to allocate an IO range to pass the address
to the host, and to have custom code to migrate the address.

> > And hey, if you want to use a PCI device to pass the physical address
> > from guest to host, then instead of reserving a couple of IO addresses,
> > sure, stick it in PCI config space in a vendor-specific capability; this
> > way it'll get migrated automatically.
> Could you elaborate more on this suggestion?

I really just mean using a PCI_Config operation region.
If you wish, I'll try to post a prototype next week.
[...]
On 01/28/16 12:13, Michael S. Tsirkin wrote:
> On Thu, Jan 28, 2016 at 11:54:25AM +0100, Igor Mammedov wrote:
>> Based on Microsoft's specifications (paper can be
>> downloaded from http://go.microsoft.com/fwlink/?LinkId=260709,
>> easily found by "Virtual Machine Generation ID" keywords),
>> add a PCI device with corresponding description in
>> SSDT ACPI table.
[...]
> It's an interesting hack, but this needs some thought. The BIOS has no idea
> this BAR is special and cannot be rebalanced, so it might put the BAR in
> the middle of the range, in effect fragmenting it.
>
> Really I think something like v12, just rewritten using the new APIs
> (probably with something like the build_append_named_dword that I
> suggested), would be a much simpler way to implement this device, given
> the weird API limitations.
>
> And hey, if you want to use a PCI device to pass the physical address from
> guest to host, then instead of reserving a couple of IO addresses, sure,
> stick it in PCI config space in a vendor-specific capability; this way
> it'll get migrated automatically.

Looks like the bottleneck for this feature is that we can't agree on the
design. I thought the ACPI approach was a nice idea; the DataTableRegion
trick made me enthusiastic. Since DataTableRegion won't work, I feel that
the ACPI approach has devolved into too many layers of manual indirection.
(If I recall right, I posted one of the recent updates on that, with some
diagrams.)

I think this investigation has been beaten to death; now we should just do
the simplest thing that works. I think Igor's PCI approach is simpler, and
may be more easily debuggable, than the one ACPI approach that could
*actually* be made to work.

Simplicity wasn't my original priority for this feature, but at v19,
simplicity of design & implementation looks like the most important trait.
At this point even a fixed MMIO range near the LAPIC range (whose
relocation we don't support anyway, BTW) could be preferable.

Anyway, I've said my piece, I'm outta here. I hope you guys can reach an
agreement.

Thanks
Laszlo
[...]
On Thu, 28 Jan 2016 14:59:25 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Thu, Jan 28, 2016 at 01:03:16PM +0100, Igor Mammedov wrote:
> > On Thu, 28 Jan 2016 13:13:04 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > It's an interesting hack, but this needs some thought. BIOS has no idea
> > > this BAR is special and cannot be rebalanced, so it might put the BAR
> > > in the middle of the range, in effect fragmenting it.
> > yep, that's the only drawback of the PCI approach.
> > > Really I think something like V12, just rewritten using the new APIs
> > > (probably with something like the build_append_named_dword that I
> > > suggested), would be a much simpler way to implement this device,
> > > given the weird API limitations.
> > We went over stating the drawbacks of both approaches several times,
> > and that's where I strongly disagree with using the v12 AML patching
> > approach, for the reasons stated in those discussions.
>
> Yes, IIRC you dislike the need to allocate an IO range to pass the address
> to the host, and to have custom code to migrate the address.

allocating IO ports is fine by me, but I'm against using the bios_linker
(ACPI) approach for the task at hand; let me enumerate one more time the
issues that make me dislike it so much (in order, most disliked first):
1. It is over-engineered for the task at hand: for the device to become
   initialized, the guest OS has to execute AML, so the init chain looks
   like:
      QEMU -> BIOS (patch AML) -> OS (AML writes buf address to IO port) ->
      QEMU (update buf address)
   It's hell to debug when something doesn't work right in this chain,
   even if there isn't any memory corruption that incorrect AML patching
   could introduce. As a result of the complexity, patches are hard to
   review, since one has to remember/relearn all the details of how
   bios_linker works in QEMU and the BIOS, hence the chance of regression
   is very high. Dynamically patched AML also introduces its own share of
   AML code that has to deal with the dynamic buffer address value.
   For example, in "nvdimm acpi: add _CRS"
   https://patchwork.ozlabs.org/patch/566697/
   a 27-line patch could be just 5-6 lines if a static (known in advance)
   buffer address were used to declare a static _CRS variable (see the
   sketch below).

2. The ACPI approach consumes guest-usable RAM to allocate the buffer and
   then makes the device DMA data into that RAM. That's a design point I
   don't agree with. Just compare with a graphics card design, where
   on-device memory is mapped directly at some GPA, not wasting RAM that
   the guest could use for other tasks. The VMGENID and NVDIMM use cases
   look to me exactly the same, i.e. instead of consuming the guest's RAM
   they should be mapped at some GPA and their memory accessed directly.
   In that case NVDIMM could even map the whole label area and
   significantly simplify the QEMU<->OSPM protocol that currently
   serializes that data through a 4K page. There is also a performance
   issue with a buffer allocated in RAM, because DMA adds an unnecessary
   copying step when data could be read/written directly off the NVDIMM.
   It might not be very important for the _DSM interface, but when it
   comes to supporting block mode it can become an issue.

The above points make the ACPI patching approach fragile, not robust,
and hard to maintain.

> > > And hey, if you want to use a pci device to pass the physical
> > > address guest to host, instead of reserving
> > > a couple of IO addresses, sure, stick it in pci config in
> > > a vendor-specific capability, this way it'll get migrated
> > > automatically.
> > Could you elaborate more on this suggestion?
>
> I really just mean using a PCI_Config operation region.
> If you wish, I'll try to post a prototype next week.

I don't know much about PCI, but it would be interesting; perhaps we
could use it somewhere else as well. However, it should be checked
whether it works with Windows: for example, a PCI-specific _DSM method
is ignored by it if the PCI device doesn't have a working PCI driver
bound to it.
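A minimal sketch of the static _CRS from point 1, built with the same
AML helpers the patch itself uses (the helper name and the premise of a
known-in-advance address are illustrative, not part of the patch):

/* With buf_paddr fixed at table-build time, the whole _CRS collapses
 * to a handful of lines and nothing is left for the BIOS or the OS to
 * patch at runtime. */
static void declare_static_buf_crs(Aml *dev, uint64_t buf_paddr,
                                   uint32_t buf_size)
{
    Aml *crs = aml_resource_template();

    aml_append(crs, aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
        AML_MAX_FIXED, AML_CACHEABLE, AML_READ_WRITE, 0,
        buf_paddr, buf_paddr + buf_size - 1, 0, buf_size));
    aml_append(dev, aml_name_decl("_CRS", crs));
}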
Tsirkin" <mst@redhat.com> > > > > - permit only one vmgenid to be created > > > > - enable BAR be mapped above 4Gb if it can't be mapped at low mem > > > > --- > > > > default-configs/i386-softmmu.mak | 1 + > > > > default-configs/x86_64-softmmu.mak | 1 + > > > > docs/specs/pci-ids.txt | 1 + > > > > hw/i386/acpi-build.c | 56 +++++++++++++- > > > > hw/misc/Makefile.objs | 1 + > > > > hw/misc/vmgenid.c | 154 +++++++++++++++++++++++++++++++++++++ > > > > include/hw/misc/vmgenid.h | 27 +++++++ > > > > include/hw/pci/pci.h | 1 + > > > > 8 files changed, 240 insertions(+), 2 deletions(-) > > > > create mode 100644 hw/misc/vmgenid.c > > > > create mode 100644 include/hw/misc/vmgenid.h > > > > > > > > diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak > > > > index b177e52..6402439 100644 > > > > --- a/default-configs/i386-softmmu.mak > > > > +++ b/default-configs/i386-softmmu.mak > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > CONFIG_IOAPIC=y > > > > CONFIG_PVPANIC=y > > > > CONFIG_MEM_HOTPLUG=y > > > > +CONFIG_VMGENID=y > > > > CONFIG_NVDIMM=y > > > > CONFIG_ACPI_NVDIMM=y > > > > CONFIG_XIO3130=y > > > > diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak > > > > index 6e3b312..fdac18f 100644 > > > > --- a/default-configs/x86_64-softmmu.mak > > > > +++ b/default-configs/x86_64-softmmu.mak > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > CONFIG_IOAPIC=y > > > > CONFIG_PVPANIC=y > > > > CONFIG_MEM_HOTPLUG=y > > > > +CONFIG_VMGENID=y > > > > CONFIG_NVDIMM=y > > > > CONFIG_ACPI_NVDIMM=y > > > > CONFIG_XIO3130=y > > > > diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt > > > > index 0adcb89..e65ecf9 100644 > > > > --- a/docs/specs/pci-ids.txt > > > > +++ b/docs/specs/pci-ids.txt > > > > @@ -47,6 +47,7 @@ PCI devices (other than virtio): > > > > 1b36:0005 PCI test device (docs/specs/pci-testdev.txt) > > > > 1b36:0006 PCI Rocker Ethernet switch device > > > > 1b36:0007 PCI SD Card Host Controller Interface (SDHCI) > > > > +1b36:0009 PCI VM-Generation device > > > > 1b36:000a PCI-PCI bridge (multiseat) > > > > > > > > All these devices are documented in docs/specs. 
> > > > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c > > > > index 78758e2..0187262 100644 > > > > --- a/hw/i386/acpi-build.c > > > > +++ b/hw/i386/acpi-build.c > > > > @@ -44,6 +44,7 @@ > > > > #include "hw/acpi/tpm.h" > > > > #include "sysemu/tpm_backend.h" > > > > #include "hw/timer/mc146818rtc_regs.h" > > > > +#include "hw/misc/vmgenid.h" > > > > > > > > /* Supported chipsets: */ > > > > #include "hw/acpi/piix4.h" > > > > @@ -237,6 +238,40 @@ static void acpi_get_misc_info(AcpiMiscInfo *info) > > > > info->applesmc_io_base = applesmc_port(); > > > > } > > > > > > > > +static Aml *build_vmgenid_device(uint64_t buf_paddr) > > > > +{ > > > > + Aml *dev, *pkg, *crs; > > > > + > > > > + dev = aml_device("VGEN"); > > > > + aml_append(dev, aml_name_decl("_HID", aml_string("QEMU0003"))); > > > > + aml_append(dev, aml_name_decl("_CID", aml_string("VM_Gen_Counter"))); > > > > + aml_append(dev, aml_name_decl("_DDN", aml_string("VM_Gen_Counter"))); > > > > + > > > > + pkg = aml_package(2); > > > > + /* low 32 bits of UUID buffer addr */ > > > > + aml_append(pkg, aml_int(buf_paddr & 0xFFFFFFFFUL)); > > > > + /* high 32 bits of UUID buffer addr */ > > > > + aml_append(pkg, aml_int(buf_paddr >> 32)); > > > > + aml_append(dev, aml_name_decl("ADDR", pkg)); > > > > + > > > > + /* > > > > + * VMGEN device has class_id PCI_CLASS_MEMORY_RAM and Windows > > > > + * displays it as "PCI RAM controller" which is marked as NO_DRV > > > > + * so Windows ignores VMGEN device completely and doesn't check > > > > + * for resource conflicts which during PCI rebalancing can lead > > > > + * to another PCI device claiming ignored BARs. To prevent this > > > > + * statically reserve resources used by VM_Gen_Counter. > > > > + * For more verbose comment see this commit message. > > > > > > What does "this commit message" mean? > > above commit message. Should I reword it to just 'see commit message' > > > > > > > > > + */ > > > > + crs = aml_resource_template(); > > > > + aml_append(crs, aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED, > > > > + AML_MAX_FIXED, AML_CACHEABLE, AML_READ_WRITE, 0, > > > > + buf_paddr, buf_paddr + VMGENID_VMGID_BUF_SIZE - 1, 0, > > > > + VMGENID_VMGID_BUF_SIZE)); > > > > + aml_append(dev, aml_name_decl("_CRS", crs)); > > > > + return dev; > > > > +} > > > > + > > > > /* > > > > * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE. > > > > * On i386 arch we only have two pci hosts, so we can look only for them. > > > > @@ -2171,6 +2206,7 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > } > > > > > > > > if (bus) { > > > > + Object *vmgen; > > > > Aml *scope = aml_scope("PCI0"); > > > > /* Scan all PCI buses. Generate tables to support hotplug. 
*/ > > > > build_append_pci_bus_devices(scope, bus, pm->pcihp_bridge_en); > > > > @@ -2187,6 +2223,24 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > aml_append(scope, dev); > > > > } > > > > > > > > + vmgen = find_vmgneid_dev(NULL); > > > > + if (vmgen) { > > > > + PCIDevice *pdev = PCI_DEVICE(vmgen); > > > > + uint64_t buf_paddr = > > > > + pci_get_bar_addr(pdev, VMGENID_VMGID_BUF_BAR); > > > > + > > > > + if (buf_paddr != PCI_BAR_UNMAPPED) { > > > > + aml_append(scope, build_vmgenid_device(buf_paddr)); > > > > + > > > > + method = aml_method("\\_GPE._E00", 0, > > > > + AML_NOTSERIALIZED); > > > > + aml_append(method, > > > > + aml_notify(aml_name("\\_SB.PCI0.VGEN"), > > > > + aml_int(0x80))); > > > > + aml_append(ssdt, method); > > > > + } > > > > + } > > > > + > > > > aml_append(sb_scope, scope); > > > > } > > > > } > > > > @@ -2489,8 +2543,6 @@ build_dsdt(GArray *table_data, GArray *linker, > > > > { > > > > aml_append(scope, aml_name_decl("_HID", aml_string("ACPI0006"))); > > > > > > > > - aml_append(scope, aml_method("_L00", 0, AML_NOTSERIALIZED)); > > > > - > > > > if (misc->is_piix4) { > > > > method = aml_method("_E01", 0, AML_NOTSERIALIZED); > > > > aml_append(method, > > > > diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs > > > > index d4765c2..1f05edd 100644 > > > > --- a/hw/misc/Makefile.objs > > > > +++ b/hw/misc/Makefile.objs > > > > @@ -43,4 +43,5 @@ obj-$(CONFIG_STM32F2XX_SYSCFG) += stm32f2xx_syscfg.o > > > > > > > > obj-$(CONFIG_PVPANIC) += pvpanic.o > > > > obj-$(CONFIG_EDU) += edu.o > > > > +obj-$(CONFIG_VMGENID) += vmgenid.o > > > > obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o > > > > diff --git a/hw/misc/vmgenid.c b/hw/misc/vmgenid.c > > > > new file mode 100644 > > > > index 0000000..a2fbdfc > > > > --- /dev/null > > > > +++ b/hw/misc/vmgenid.c > > > > @@ -0,0 +1,154 @@ > > > > +/* > > > > + * Virtual Machine Generation ID Device > > > > + * > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > + * > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > + * > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > + * See the COPYING file in the top-level directory. 
> > > > + * > > > > + */ > > > > + > > > > +#include "hw/i386/pc.h" > > > > +#include "hw/pci/pci.h" > > > > +#include "hw/misc/vmgenid.h" > > > > +#include "hw/acpi/acpi.h" > > > > +#include "qapi/visitor.h" > > > > + > > > > +#define VMGENID(obj) OBJECT_CHECK(VmGenIdState, (obj), VMGENID_DEVICE) > > > > + > > > > +typedef struct VmGenIdState { > > > > + PCIDevice parent_obj; > > > > + MemoryRegion iomem; > > > > + union { > > > > + uint8_t guid[16]; > > > > + uint8_t guid_page[VMGENID_VMGID_BUF_SIZE]; > > > > + }; > > > > + bool guid_set; > > > > +} VmGenIdState; > > > > + > > > > +Object *find_vmgneid_dev(Error **errp) > > > > +{ > > > > + Object *obj = object_resolve_path_type("", VMGENID_DEVICE, NULL); > > > > + if (!obj) { > > > > + error_setg(errp, VMGENID_DEVICE " is not found"); > > > > + } > > > > + return obj; > > > > +} > > > > + > > > > +static void vmgenid_update_guest(VmGenIdState *s) > > > > +{ > > > > + Object *acpi_obj; > > > > + void *ptr = memory_region_get_ram_ptr(&s->iomem); > > > > + > > > > + memcpy(ptr, &s->guid, sizeof(s->guid)); > > > > + memory_region_set_dirty(&s->iomem, 0, sizeof(s->guid)); > > > > + > > > > + acpi_obj = object_resolve_path_type("", TYPE_ACPI_DEVICE_IF, NULL); > > > > + if (acpi_obj) { > > > > + AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_obj); > > > > + AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_obj); > > > > + ACPIREGS *acpi_regs = adevc->regs(adev); > > > > + > > > > + acpi_regs->gpe.sts[0] |= 1; /* _GPE.E00 handler */ > > > > + acpi_update_sci(acpi_regs, adevc->sci(adev)); > > > > + } > > > > +} > > > > + > > > > +static void vmgenid_set_guid(Object *obj, const char *value, Error **errp) > > > > +{ > > > > + VmGenIdState *s = VMGENID(obj); > > > > + > > > > + if (qemu_uuid_parse(value, s->guid) < 0) { > > > > + error_setg(errp, "'%s." VMGENID_GUID > > > > + "': Failed to parse GUID string: %s", > > > > + object_get_typename(OBJECT(s)), > > > > + value); > > > > + return; > > > > + } > > > > + > > > > + s->guid_set = true; > > > > + vmgenid_update_guest(s); > > > > +} > > > > + > > > > +static void vmgenid_get_vmgid_addr(Object *obj, Visitor *v, void *opaque, > > > > + const char *name, Error **errp) > > > > +{ > > > > + int64_t value = pci_get_bar_addr(PCI_DEVICE(obj), 0); > > > > + > > > > + if (value == PCI_BAR_UNMAPPED) { > > > > + error_setg(errp, "'%s." VMGENID_VMGID_BUF_ADDR "': not initialized", > > > > + object_get_typename(OBJECT(obj))); > > > > + return; > > > > + } > > > > + visit_type_int(v, &value, name, errp); > > > > +} > > > > + > > > > +static void vmgenid_initfn(Object *obj) > > > > +{ > > > > + VmGenIdState *s = VMGENID(obj); > > > > + > > > > + memory_region_init_ram(&s->iomem, obj, "vgid.bar", sizeof(s->guid_page), > > > > + &error_abort); > > > > + > > > > + object_property_add_str(obj, VMGENID_GUID, NULL, vmgenid_set_guid, NULL); > > > > + object_property_add(obj, VMGENID_VMGID_BUF_ADDR, "int", > > > > + vmgenid_get_vmgid_addr, NULL, NULL, NULL, NULL); > > > > +} > > > > + > > > > + > > > > +static void vmgenid_realize(PCIDevice *dev, Error **errp) > > > > +{ > > > > + VmGenIdState *s = VMGENID(dev); > > > > + bool ambiguous = false; > > > > + > > > > + object_resolve_path_type("", VMGENID_DEVICE, &ambiguous); > > > > + if (ambiguous) { > > > > + error_setg(errp, "no more than one " VMGENID_DEVICE > > > > + " device is permitted"); > > > > + return; > > > > + } > > > > + > > > > + if (!s->guid_set) { > > > > + error_setg(errp, "'%s." 
VMGENID_GUID "' property is not set", > > > > + object_get_typename(OBJECT(s))); > > > > + return; > > > > + } > > > > + > > > > + vmstate_register_ram(&s->iomem, DEVICE(s)); > > > > + pci_register_bar(PCI_DEVICE(s), VMGENID_VMGID_BUF_BAR, > > > > + PCI_BASE_ADDRESS_MEM_PREFETCH | > > > > + PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, > > > > + &s->iomem); > > > > + return; > > > > +} > > > > + > > > > +static void vmgenid_class_init(ObjectClass *klass, void *data) > > > > +{ > > > > + DeviceClass *dc = DEVICE_CLASS(klass); > > > > + PCIDeviceClass *k = PCI_DEVICE_CLASS(klass); > > > > + > > > > + set_bit(DEVICE_CATEGORY_MISC, dc->categories); > > > > + dc->hotpluggable = false; > > > > + k->realize = vmgenid_realize; > > > > + k->vendor_id = PCI_VENDOR_ID_REDHAT; > > > > + k->device_id = PCI_DEVICE_ID_REDHAT_VMGENID; > > > > + k->class_id = PCI_CLASS_MEMORY_RAM; > > > > +} > > > > + > > > > +static const TypeInfo vmgenid_device_info = { > > > > + .name = VMGENID_DEVICE, > > > > + .parent = TYPE_PCI_DEVICE, > > > > + .instance_size = sizeof(VmGenIdState), > > > > + .instance_init = vmgenid_initfn, > > > > + .class_init = vmgenid_class_init, > > > > +}; > > > > + > > > > +static void vmgenid_register_types(void) > > > > +{ > > > > + type_register_static(&vmgenid_device_info); > > > > +} > > > > + > > > > +type_init(vmgenid_register_types) > > > > diff --git a/include/hw/misc/vmgenid.h b/include/hw/misc/vmgenid.h > > > > new file mode 100644 > > > > index 0000000..b90882c > > > > --- /dev/null > > > > +++ b/include/hw/misc/vmgenid.h > > > > @@ -0,0 +1,27 @@ > > > > +/* > > > > + * Virtual Machine Generation ID Device > > > > + * > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > + * > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > + * > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > + * See the COPYING file in the top-level directory. > > > > + * > > > > + */ > > > > + > > > > +#ifndef HW_MISC_VMGENID_H > > > > +#define HW_MISC_VMGENID_H > > > > + > > > > +#include "qom/object.h" > > > > + > > > > +#define VMGENID_DEVICE "vmgenid" > > > > +#define VMGENID_GUID "guid" > > > > +#define VMGENID_VMGID_BUF_ADDR "vmgid-addr" > > > > +#define VMGENID_VMGID_BUF_SIZE 0x1000 > > > > +#define VMGENID_VMGID_BUF_BAR 0 > > > > + > > > > +Object *find_vmgneid_dev(Error **errp); > > > > + > > > > +#endif > > > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h > > > > index dedf277..f4c9d48 100644 > > > > --- a/include/hw/pci/pci.h > > > > +++ b/include/hw/pci/pci.h > > > > @@ -94,6 +94,7 @@ > > > > #define PCI_DEVICE_ID_REDHAT_PXB 0x0009 > > > > #define PCI_DEVICE_ID_REDHAT_BRIDGE_SEAT 0x000a > > > > #define PCI_DEVICE_ID_REDHAT_PXB_PCIE 0x000b > > > > +#define PCI_DEVICE_ID_REDHAT_VMGENID 0x000c > > > > #define PCI_DEVICE_ID_REDHAT_QXL 0x0100 > > > > > > > > #define FMT_PCIBUS PRIx64 > > > > -- > > > > 1.8.3.1
On Fri, Jan 29, 2016 at 12:13:59PM +0100, Igor Mammedov wrote:
> On Thu, 28 Jan 2016 14:59:25 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > Yes, IIRC you dislike the need to allocate an IO range to pass the address
> > to the host, and to have custom code to migrate the address.
> allocating IO ports is fine by me, but I'm against using the bios_linker
> (ACPI) approach for the task at hand; let me enumerate one more time the
> issues that make me dislike it so much (in order, most disliked first):
>
> 1. It is over-engineered for the task at hand: for the device to become
>    initialized, the guest OS has to execute AML, so the init chain looks
>    like:
>       QEMU -> BIOS (patch AML) -> OS (AML writes buf address to IO port) ->
>       QEMU (update buf address)
>    It's hell to debug when something doesn't work right in this chain

Well this is not very different from e.g. virtio.
If it's just AML that worries you, we could teach BIOS/EFI a new command
to give some addresses back to QEMU after linking. Would this address
the issue?

>    even if there isn't any memory corruption that incorrect AML patching
>    could introduce. As a result of the complexity, patches are hard to
>    review, since one has to remember/relearn all the details of how
>    bios_linker works in QEMU and the BIOS, hence the chance of regression
>    is very high. Dynamically patched AML also introduces its own share of
>    AML code that has to deal with the dynamic buffer address value.
>    For example, in "nvdimm acpi: add _CRS"
>    https://patchwork.ozlabs.org/patch/566697/
>    a 27-line patch could be just 5-6 lines if a static (known in advance)
>    buffer address were used to declare a static _CRS variable.

The problem is with finding a fixed address, and the fragmentation that
this causes. Look at the mess we have with just allocating addresses for
RAM. I think it's a mistake to add to this mess. Either let's teach
management to specify an address map, or let the guest allocate addresses
for us.

> 2. The ACPI approach consumes guest-usable RAM to allocate the buffer
>    and then makes the device DMA data into that RAM.
>    That's a design point I don't agree with.

Blame the broken VM GEN ID spec.

For VM GEN ID, instead of DMA we could make ACPI code read the PCI BAR
and copy the data over; this would fix rebalancing, but there is a
problem with this approach: it cannot be done atomically (while the VM
is not yet running and accessing RAM). So you can have the guest read a
partially corrupted ID from memory.

And hey, nowadays we actually made fw_cfg do DMA too.

>    Just compare with a graphics card design, where on-device memory
>    is mapped directly at some GPA, not wasting RAM that the guest could
>    use for other tasks.

This might have been true 20 years ago. Most modern cards do DMA.

>    The VMGENID and NVDIMM use cases look to me exactly the same, i.e.
>    instead of consuming the guest's RAM they should be mapped at
>    some GPA and their memory accessed directly.

VMGENID is tied to a spec that rather arbitrarily asks for a fixed
address. This breaks the straightforward approach of using a
rebalanceable PCI BAR.

>    In that case NVDIMM could even map the whole label area and
>    significantly simplify the QEMU<->OSPM protocol that currently
>    serializes that data through a 4K page.
>    There is also a performance issue with a buffer allocated in RAM,
>    because DMA adds an unnecessary copying step when data could be
>    read/written directly off the NVDIMM.
>    It might not be very important for the _DSM interface, but when it
>    comes to supporting block mode it can become an issue.

So for NVDIMM, presumably it will have code to access the PCI BAR
properly, so it's guaranteed to work across BAR rebalancing.
Would that address the performance issue?

> The above points make the ACPI patching approach fragile, not robust,
> and hard to maintain.

Wrt GEN ID these are all kind of subjective though.
I especially don't get what appears to be your general dislike of the
linker host/guest interface. It's there and we are not moving away from
it, so why not use it in more places? Or, if you think it's wrong, why
don't you build something better? We could then maybe use it for these
things as well.
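A rough sketch of the vendor-specific capability idea suggested earlier
in the thread (this is not the promised prototype, which was never
posted here; the capability layout, offsets and helper name are
assumptions for illustration):

/* Mirror the buffer GPA into a vendor-specific capability in config
 * space; config space is already migrated with the device, so the
 * address needs no custom vmstate.  Layout assumed: id, next, length,
 * then the 64-bit GPA, little-endian. */
static void vmgenid_add_addr_cap(PCIDevice *pdev, uint64_t buf_paddr,
                                 Error **errp)
{
    int cap = pci_add_capability(pdev, PCI_CAP_ID_VNDR, 0 /* auto */,
                                 3 + 8);

    if (cap < 0) {
        error_setg(errp, "failed to add vendor-specific capability");
        return;
    }
    pdev->config[cap + 2] = 3 + 8;                    /* capability length */
    pci_set_long(pdev->config + cap + 3, buf_paddr & 0xFFFFFFFFUL);
    pci_set_long(pdev->config + cap + 7, buf_paddr >> 32);
}

A PCI_Config operation region in the SSDT could then read the address
back without any AML patching, modulo the Windows driver-binding concern
raised above.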
On Sun, 31 Jan 2016 18:22:13 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > Yes, IIRC you dislike the need to allocate an IO range to pass the
> > > address to the host, and to have custom code to migrate the address.
> > allocating IO ports is fine by me, but I'm against using the bios_linker
> > (ACPI) approach for the task at hand; let me enumerate one more time the
> > issues that make me dislike it so much (in order, most disliked first):
> >
> > 1. It is over-engineered for the task at hand: for the device to become
> >    initialized, the guest OS has to execute AML, so the init chain looks
> >    like:
> >       QEMU -> BIOS (patch AML) -> OS (AML writes buf address to IO port) ->
> >       QEMU (update buf address)
> >    It's hell to debug when something doesn't work right in this chain
>
> Well this is not very different from e.g. virtio.
> If it's just AML that worries you, we could teach BIOS/EFI a new command
> to give some addresses back to QEMU after linking. Would this address
> the issue?

It would make it marginally better (especially from a testing PoV),
though it won't fix the other issues.

> > even if there isn't any memory corruption that incorrect AML patching
> > could introduce. As a result of the complexity, patches are hard to
> > review, since one has to remember/relearn all the details of how
> > bios_linker works in QEMU and the BIOS, hence the chance of regression
> > is very high. Dynamically patched AML also introduces its own share of
> > AML code that has to deal with the dynamic buffer address value.
> > For example, in "nvdimm acpi: add _CRS"
> > https://patchwork.ozlabs.org/patch/566697/
> > a 27-line patch could be just 5-6 lines if a static (known in advance)
> > buffer address were used to declare a static _CRS variable.
>
> The problem is with finding a fixed address, and the fragmentation that
> this causes. Look at the mess we have with just allocating addresses for
> RAM. I think it's a mistake to add to this mess. Either let's teach
> management to specify an address map, or let the guest allocate addresses
> for us.

Yep, the problem here is in the 'fixed' part, not in an address in
general.

Allowing management to specify an address map partially works; see
pc-dimm, where on the target side libvirt specifies the address at which
a dimm was mapped on the source (see the sketch below). The initial RAM
mess could be fixed for future machine types in a similar way, by
replacing memory_region_allocate_system_memory() with pc-dimms, so that
a specific machine type could reproduce the same layout.

But default addresses don't appear magically and have to come from
somewhere, so we have to have an address allocator somewhere. If we put
the allocator into the guest and emulate a memory controller in QEMU, we
would probably need to add fw_cfg interfaces that describe the hardware
which needs mapping (probably an ever-growing interface, as it was with
ACPI tables before we axed them in the BIOS and moved them into QEMU).
Alternatively, we can put the allocator in QEMU, which could be simpler
to implement and maintain, since we won't need to implement extra fw_cfg
interfaces and a memory controller, nor ship and fix a QEMU/BIOS pair in
sync as it was with ACPI in the past.
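For illustration, the pc-dimm precedent mentioned above (values are made
up): the source side lets QEMU pick the GPA, management reads it back
(e.g. with the HMP command "info memory-devices") and pins the same
address explicitly on the destination:

    -object memory-backend-ram,id=mem1,size=1G \
    -device pc-dimm,id=dimm1,memdev=mem1,addr=0x140000000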
> > 2. The ACPI approach consumes guest-usable RAM to allocate the buffer
> >    and then makes the device DMA data into that RAM.
> >    That's a design point I don't agree with.
>
> Blame the broken VM GEN ID spec.
>
> For VM GEN ID, instead of DMA we could make ACPI code read the PCI BAR
> and copy the data over; this would fix rebalancing, but there is a
> problem with this approach: it cannot be done atomically (while the VM
> is not yet running and accessing RAM). So you can have the guest read a
> partially corrupted ID from memory.

Yep, the VM GEN ID spec is broken and we can't do anything about it, as
it's absolutely impossible to guarantee an atomic update: the guest OS
has the address of the buffer and can read it at any time. Nothing can
be done here.

> And hey, nowadays we actually made fw_cfg do DMA too.

I'm not against DMA; it's the direction that seems wrong to me. What we
need here is a way to allocate a GPA range and make sure that QEMU maps
device memory there; a PCI BAR is one of the ways to do it.

> >    Just compare with a graphics card design, where on-device memory
> >    is mapped directly at some GPA, not wasting RAM that the guest could
> >    use for other tasks.
>
> This might have been true 20 years ago. Most modern cards do DMA.

Modern cards with their own RAM map their VRAM into the address space
directly and allow users to use it (GEM API), so they do not waste
conventional RAM. For example, NVIDIA VRAM is mapped as PCI BARs the
same way as in this series (even the PCI class id is the same).

> >    The VMGENID and NVDIMM use cases look to me exactly the same, i.e.
> >    instead of consuming the guest's RAM they should be mapped at
> >    some GPA and their memory accessed directly.
>
> VMGENID is tied to a spec that rather arbitrarily asks for a fixed
> address. This breaks the straightforward approach of using a
> rebalanceable PCI BAR.

For PCI rebalancing to work on Windows, one has to provide a working PCI
driver; otherwise the OS ignores the device when rebalancing happens and
might map something else over the ignored BAR.

> >    In that case NVDIMM could even map the whole label area and
> >    significantly simplify the QEMU<->OSPM protocol that currently
> >    serializes that data through a 4K page.
> >    There is also a performance issue with a buffer allocated in RAM,
> >    because DMA adds an unnecessary copying step when data could be
> >    read/written directly off the NVDIMM.
> >    It might not be very important for the _DSM interface, but when it
> >    comes to supporting block mode it can become an issue.
>
> So for NVDIMM, presumably it will have code to access the PCI BAR
> properly, so it's guaranteed to work across BAR rebalancing.
> Would that address the performance issue?

It would, if rebalancing were to account for driverless PCI device BARs;
but it doesn't, hence such BARs need to be statically pinned at the
place where the BIOS put them at startup. I'm also not sure that a
PCI_Config operation region would work on Windows without a loaded
driver (similar to the _DSM case).

> > The above points make the ACPI patching approach fragile, not robust,
> > and hard to maintain.
>
> Wrt GEN ID these are all kind of subjective though. I especially don't
> get what appears to be your general dislike of the linker host/guest
> interface.

Besides the technical issues, the general dislike is just what I've
written: the bios_linker_loader_add_pointer() interface is fragile, not
robust. To make it less fragile:

1. it should be impossible to corrupt memory or patch a wrong address.
   The current implementation silently relies on the value referenced by
   the 'pointer' argument, and to figure that out one has to read the
   linker code on the BIOS side. That value could easily be set wrong
   and slip through review; the API shouldn't rely on the caller setting
   the value pointed to by that argument.
2. if it's going to be used for patching AML, bios_linker_loader_add_pointer()
   should assert, when called, if the to-be-patched AML object is wrong
   and patching would corrupt the AML blob.
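A sketch of both points, assuming the 2016-era signature of
bios_linker_loader_add_pointer(); the checked wrapper itself is
hypothetical, not an existing QEMU function:

/* Take an explicit offset into the table blob instead of a raw pointer,
 * and refuse to emit a patch entry that falls outside the blob, so a
 * wrong offset trips an assert at table-build time instead of silently
 * corrupting the AML. */
static void checked_linker_add_pointer(GArray *linker,
                                       const char *dest_file,
                                       const char *src_file,
                                       GArray *table,
                                       size_t offset, uint8_t ptr_size)
{
    assert(ptr_size == 4 || ptr_size == 8);
    assert(offset + ptr_size <= table->len);

    bios_linker_loader_add_pointer(linker, dest_file, src_file, table,
                                   table->data + offset, ptr_size);
}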
Yep, I think for vmgenid, and even more so for nvdimm,
it would be better to allocate GPAs in QEMU and map the backing
MemoryRegions directly in QEMU.
For nvdimm (the main data region)
we already do it using pc-dimm's GPA allocation algorithm; we also
could use a similar approach for nvdimm's label area and vmgenid.

Here is a simple attempt to add a limited GPA allocator in high memory
https://patchwork.ozlabs.org/patch/540852/
But it hasn't got any comments from you and was ignored.
Lets consider it and perhaps we could come up with a GPA allocator
that could be used for other things as well (a toy sketch of the idea
follows at the end of this message).

> > > > > And hey, if you want to use a pci device to pass the physical
> > > > > address guest to host, instead of reserving
> > > > > a couple of IO addresses, sure, stick it in pci config in
> > > > > a vendor-specific capability, this way it'll get migrated
> > > > > automatically.
> > > > Could you elaborate more on this suggestion?
> > > I really just mean using PCI_Config operation region.
> > > If you wish, I'll try to post a prototype next week.
> > I don't know much about PCI but it would be interesting,
> > perhaps we could use it somewhere else.
> >
> > However it should be checked if it works with Windows,
> > for example the PCI-specific _DSM method is ignored by it
> > if the PCI device doesn't have a working PCI driver bound to it.

[... remainder of the quoted patch trimmed ...]
Tsirkin" <mst@redhat.com> > > > > > > - permit only one vmgenid to be created > > > > > > - enable BAR be mapped above 4Gb if it can't be mapped at low mem > > > > > > --- > > > > > > default-configs/i386-softmmu.mak | 1 + > > > > > > default-configs/x86_64-softmmu.mak | 1 + > > > > > > docs/specs/pci-ids.txt | 1 + > > > > > > hw/i386/acpi-build.c | 56 +++++++++++++- > > > > > > hw/misc/Makefile.objs | 1 + > > > > > > hw/misc/vmgenid.c | 154 +++++++++++++++++++++++++++++++++++++ > > > > > > include/hw/misc/vmgenid.h | 27 +++++++ > > > > > > include/hw/pci/pci.h | 1 + > > > > > > 8 files changed, 240 insertions(+), 2 deletions(-) > > > > > > create mode 100644 hw/misc/vmgenid.c > > > > > > create mode 100644 include/hw/misc/vmgenid.h > > > > > > > > > > > > diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak > > > > > > index b177e52..6402439 100644 > > > > > > --- a/default-configs/i386-softmmu.mak > > > > > > +++ b/default-configs/i386-softmmu.mak > > > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > > > CONFIG_IOAPIC=y > > > > > > CONFIG_PVPANIC=y > > > > > > CONFIG_MEM_HOTPLUG=y > > > > > > +CONFIG_VMGENID=y > > > > > > CONFIG_NVDIMM=y > > > > > > CONFIG_ACPI_NVDIMM=y > > > > > > CONFIG_XIO3130=y > > > > > > diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak > > > > > > index 6e3b312..fdac18f 100644 > > > > > > --- a/default-configs/x86_64-softmmu.mak > > > > > > +++ b/default-configs/x86_64-softmmu.mak > > > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > > > CONFIG_IOAPIC=y > > > > > > CONFIG_PVPANIC=y > > > > > > CONFIG_MEM_HOTPLUG=y > > > > > > +CONFIG_VMGENID=y > > > > > > CONFIG_NVDIMM=y > > > > > > CONFIG_ACPI_NVDIMM=y > > > > > > CONFIG_XIO3130=y > > > > > > diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt > > > > > > index 0adcb89..e65ecf9 100644 > > > > > > --- a/docs/specs/pci-ids.txt > > > > > > +++ b/docs/specs/pci-ids.txt > > > > > > @@ -47,6 +47,7 @@ PCI devices (other than virtio): > > > > > > 1b36:0005 PCI test device (docs/specs/pci-testdev.txt) > > > > > > 1b36:0006 PCI Rocker Ethernet switch device > > > > > > 1b36:0007 PCI SD Card Host Controller Interface (SDHCI) > > > > > > +1b36:0009 PCI VM-Generation device > > > > > > 1b36:000a PCI-PCI bridge (multiseat) > > > > > > > > > > > > All these devices are documented in docs/specs. 
> > > > > > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c > > > > > > index 78758e2..0187262 100644 > > > > > > --- a/hw/i386/acpi-build.c > > > > > > +++ b/hw/i386/acpi-build.c > > > > > > @@ -44,6 +44,7 @@ > > > > > > #include "hw/acpi/tpm.h" > > > > > > #include "sysemu/tpm_backend.h" > > > > > > #include "hw/timer/mc146818rtc_regs.h" > > > > > > +#include "hw/misc/vmgenid.h" > > > > > > > > > > > > /* Supported chipsets: */ > > > > > > #include "hw/acpi/piix4.h" > > > > > > @@ -237,6 +238,40 @@ static void acpi_get_misc_info(AcpiMiscInfo *info) > > > > > > info->applesmc_io_base = applesmc_port(); > > > > > > } > > > > > > > > > > > > +static Aml *build_vmgenid_device(uint64_t buf_paddr) > > > > > > +{ > > > > > > + Aml *dev, *pkg, *crs; > > > > > > + > > > > > > + dev = aml_device("VGEN"); > > > > > > + aml_append(dev, aml_name_decl("_HID", aml_string("QEMU0003"))); > > > > > > + aml_append(dev, aml_name_decl("_CID", aml_string("VM_Gen_Counter"))); > > > > > > + aml_append(dev, aml_name_decl("_DDN", aml_string("VM_Gen_Counter"))); > > > > > > + > > > > > > + pkg = aml_package(2); > > > > > > + /* low 32 bits of UUID buffer addr */ > > > > > > + aml_append(pkg, aml_int(buf_paddr & 0xFFFFFFFFUL)); > > > > > > + /* high 32 bits of UUID buffer addr */ > > > > > > + aml_append(pkg, aml_int(buf_paddr >> 32)); > > > > > > + aml_append(dev, aml_name_decl("ADDR", pkg)); > > > > > > + > > > > > > + /* > > > > > > + * VMGEN device has class_id PCI_CLASS_MEMORY_RAM and Windows > > > > > > + * displays it as "PCI RAM controller" which is marked as NO_DRV > > > > > > + * so Windows ignores VMGEN device completely and doesn't check > > > > > > + * for resource conflicts which during PCI rebalancing can lead > > > > > > + * to another PCI device claiming ignored BARs. To prevent this > > > > > > + * statically reserve resources used by VM_Gen_Counter. > > > > > > + * For more verbose comment see this commit message. > > > > > > > > > > What does "this commit message" mean? > > > > above commit message. Should I reword it to just 'see commit message' > > > > > > > > > > > > > > > + */ > > > > > > + crs = aml_resource_template(); > > > > > > + aml_append(crs, aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED, > > > > > > + AML_MAX_FIXED, AML_CACHEABLE, AML_READ_WRITE, 0, > > > > > > + buf_paddr, buf_paddr + VMGENID_VMGID_BUF_SIZE - 1, 0, > > > > > > + VMGENID_VMGID_BUF_SIZE)); > > > > > > + aml_append(dev, aml_name_decl("_CRS", crs)); > > > > > > + return dev; > > > > > > +} > > > > > > + > > > > > > /* > > > > > > * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE. > > > > > > * On i386 arch we only have two pci hosts, so we can look only for them. > > > > > > @@ -2171,6 +2206,7 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > > > } > > > > > > > > > > > > if (bus) { > > > > > > + Object *vmgen; > > > > > > Aml *scope = aml_scope("PCI0"); > > > > > > /* Scan all PCI buses. Generate tables to support hotplug. 
*/ > > > > > > build_append_pci_bus_devices(scope, bus, pm->pcihp_bridge_en); > > > > > > @@ -2187,6 +2223,24 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > > > aml_append(scope, dev); > > > > > > } > > > > > > > > > > > > + vmgen = find_vmgneid_dev(NULL); > > > > > > + if (vmgen) { > > > > > > + PCIDevice *pdev = PCI_DEVICE(vmgen); > > > > > > + uint64_t buf_paddr = > > > > > > + pci_get_bar_addr(pdev, VMGENID_VMGID_BUF_BAR); > > > > > > + > > > > > > + if (buf_paddr != PCI_BAR_UNMAPPED) { > > > > > > + aml_append(scope, build_vmgenid_device(buf_paddr)); > > > > > > + > > > > > > + method = aml_method("\\_GPE._E00", 0, > > > > > > + AML_NOTSERIALIZED); > > > > > > + aml_append(method, > > > > > > + aml_notify(aml_name("\\_SB.PCI0.VGEN"), > > > > > > + aml_int(0x80))); > > > > > > + aml_append(ssdt, method); > > > > > > + } > > > > > > + } > > > > > > + > > > > > > aml_append(sb_scope, scope); > > > > > > } > > > > > > } > > > > > > @@ -2489,8 +2543,6 @@ build_dsdt(GArray *table_data, GArray *linker, > > > > > > { > > > > > > aml_append(scope, aml_name_decl("_HID", aml_string("ACPI0006"))); > > > > > > > > > > > > - aml_append(scope, aml_method("_L00", 0, AML_NOTSERIALIZED)); > > > > > > - > > > > > > if (misc->is_piix4) { > > > > > > method = aml_method("_E01", 0, AML_NOTSERIALIZED); > > > > > > aml_append(method, > > > > > > diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs > > > > > > index d4765c2..1f05edd 100644 > > > > > > --- a/hw/misc/Makefile.objs > > > > > > +++ b/hw/misc/Makefile.objs > > > > > > @@ -43,4 +43,5 @@ obj-$(CONFIG_STM32F2XX_SYSCFG) += stm32f2xx_syscfg.o > > > > > > > > > > > > obj-$(CONFIG_PVPANIC) += pvpanic.o > > > > > > obj-$(CONFIG_EDU) += edu.o > > > > > > +obj-$(CONFIG_VMGENID) += vmgenid.o > > > > > > obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o > > > > > > diff --git a/hw/misc/vmgenid.c b/hw/misc/vmgenid.c > > > > > > new file mode 100644 > > > > > > index 0000000..a2fbdfc > > > > > > --- /dev/null > > > > > > +++ b/hw/misc/vmgenid.c > > > > > > @@ -0,0 +1,154 @@ > > > > > > +/* > > > > > > + * Virtual Machine Generation ID Device > > > > > > + * > > > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > > > + * > > > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > > > + * > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > > > + * See the COPYING file in the top-level directory. 
> > > > > > + * > > > > > > + */ > > > > > > + > > > > > > +#include "hw/i386/pc.h" > > > > > > +#include "hw/pci/pci.h" > > > > > > +#include "hw/misc/vmgenid.h" > > > > > > +#include "hw/acpi/acpi.h" > > > > > > +#include "qapi/visitor.h" > > > > > > + > > > > > > +#define VMGENID(obj) OBJECT_CHECK(VmGenIdState, (obj), VMGENID_DEVICE) > > > > > > + > > > > > > +typedef struct VmGenIdState { > > > > > > + PCIDevice parent_obj; > > > > > > + MemoryRegion iomem; > > > > > > + union { > > > > > > + uint8_t guid[16]; > > > > > > + uint8_t guid_page[VMGENID_VMGID_BUF_SIZE]; > > > > > > + }; > > > > > > + bool guid_set; > > > > > > +} VmGenIdState; > > > > > > + > > > > > > +Object *find_vmgneid_dev(Error **errp) > > > > > > +{ > > > > > > + Object *obj = object_resolve_path_type("", VMGENID_DEVICE, NULL); > > > > > > + if (!obj) { > > > > > > + error_setg(errp, VMGENID_DEVICE " is not found"); > > > > > > + } > > > > > > + return obj; > > > > > > +} > > > > > > + > > > > > > +static void vmgenid_update_guest(VmGenIdState *s) > > > > > > +{ > > > > > > + Object *acpi_obj; > > > > > > + void *ptr = memory_region_get_ram_ptr(&s->iomem); > > > > > > + > > > > > > + memcpy(ptr, &s->guid, sizeof(s->guid)); > > > > > > + memory_region_set_dirty(&s->iomem, 0, sizeof(s->guid)); > > > > > > + > > > > > > + acpi_obj = object_resolve_path_type("", TYPE_ACPI_DEVICE_IF, NULL); > > > > > > + if (acpi_obj) { > > > > > > + AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_obj); > > > > > > + AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_obj); > > > > > > + ACPIREGS *acpi_regs = adevc->regs(adev); > > > > > > + > > > > > > + acpi_regs->gpe.sts[0] |= 1; /* _GPE.E00 handler */ > > > > > > + acpi_update_sci(acpi_regs, adevc->sci(adev)); > > > > > > + } > > > > > > +} > > > > > > + > > > > > > +static void vmgenid_set_guid(Object *obj, const char *value, Error **errp) > > > > > > +{ > > > > > > + VmGenIdState *s = VMGENID(obj); > > > > > > + > > > > > > + if (qemu_uuid_parse(value, s->guid) < 0) { > > > > > > + error_setg(errp, "'%s." VMGENID_GUID > > > > > > + "': Failed to parse GUID string: %s", > > > > > > + object_get_typename(OBJECT(s)), > > > > > > + value); > > > > > > + return; > > > > > > + } > > > > > > + > > > > > > + s->guid_set = true; > > > > > > + vmgenid_update_guest(s); > > > > > > +} > > > > > > + > > > > > > +static void vmgenid_get_vmgid_addr(Object *obj, Visitor *v, void *opaque, > > > > > > + const char *name, Error **errp) > > > > > > +{ > > > > > > + int64_t value = pci_get_bar_addr(PCI_DEVICE(obj), 0); > > > > > > + > > > > > > + if (value == PCI_BAR_UNMAPPED) { > > > > > > + error_setg(errp, "'%s." 
VMGENID_VMGID_BUF_ADDR "': not initialized", > > > > > > + object_get_typename(OBJECT(obj))); > > > > > > + return; > > > > > > + } > > > > > > + visit_type_int(v, &value, name, errp); > > > > > > +} > > > > > > + > > > > > > +static void vmgenid_initfn(Object *obj) > > > > > > +{ > > > > > > + VmGenIdState *s = VMGENID(obj); > > > > > > + > > > > > > + memory_region_init_ram(&s->iomem, obj, "vgid.bar", sizeof(s->guid_page), > > > > > > + &error_abort); > > > > > > + > > > > > > + object_property_add_str(obj, VMGENID_GUID, NULL, vmgenid_set_guid, NULL); > > > > > > + object_property_add(obj, VMGENID_VMGID_BUF_ADDR, "int", > > > > > > + vmgenid_get_vmgid_addr, NULL, NULL, NULL, NULL); > > > > > > +} > > > > > > + > > > > > > + > > > > > > +static void vmgenid_realize(PCIDevice *dev, Error **errp) > > > > > > +{ > > > > > > + VmGenIdState *s = VMGENID(dev); > > > > > > + bool ambiguous = false; > > > > > > + > > > > > > + object_resolve_path_type("", VMGENID_DEVICE, &ambiguous); > > > > > > + if (ambiguous) { > > > > > > + error_setg(errp, "no more than one " VMGENID_DEVICE > > > > > > + " device is permitted"); > > > > > > + return; > > > > > > + } > > > > > > + > > > > > > + if (!s->guid_set) { > > > > > > + error_setg(errp, "'%s." VMGENID_GUID "' property is not set", > > > > > > + object_get_typename(OBJECT(s))); > > > > > > + return; > > > > > > + } > > > > > > + > > > > > > + vmstate_register_ram(&s->iomem, DEVICE(s)); > > > > > > + pci_register_bar(PCI_DEVICE(s), VMGENID_VMGID_BUF_BAR, > > > > > > + PCI_BASE_ADDRESS_MEM_PREFETCH | > > > > > > + PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, > > > > > > + &s->iomem); > > > > > > + return; > > > > > > +} > > > > > > + > > > > > > +static void vmgenid_class_init(ObjectClass *klass, void *data) > > > > > > +{ > > > > > > + DeviceClass *dc = DEVICE_CLASS(klass); > > > > > > + PCIDeviceClass *k = PCI_DEVICE_CLASS(klass); > > > > > > + > > > > > > + set_bit(DEVICE_CATEGORY_MISC, dc->categories); > > > > > > + dc->hotpluggable = false; > > > > > > + k->realize = vmgenid_realize; > > > > > > + k->vendor_id = PCI_VENDOR_ID_REDHAT; > > > > > > + k->device_id = PCI_DEVICE_ID_REDHAT_VMGENID; > > > > > > + k->class_id = PCI_CLASS_MEMORY_RAM; > > > > > > +} > > > > > > + > > > > > > +static const TypeInfo vmgenid_device_info = { > > > > > > + .name = VMGENID_DEVICE, > > > > > > + .parent = TYPE_PCI_DEVICE, > > > > > > + .instance_size = sizeof(VmGenIdState), > > > > > > + .instance_init = vmgenid_initfn, > > > > > > + .class_init = vmgenid_class_init, > > > > > > +}; > > > > > > + > > > > > > +static void vmgenid_register_types(void) > > > > > > +{ > > > > > > + type_register_static(&vmgenid_device_info); > > > > > > +} > > > > > > + > > > > > > +type_init(vmgenid_register_types) > > > > > > diff --git a/include/hw/misc/vmgenid.h b/include/hw/misc/vmgenid.h > > > > > > new file mode 100644 > > > > > > index 0000000..b90882c > > > > > > --- /dev/null > > > > > > +++ b/include/hw/misc/vmgenid.h > > > > > > @@ -0,0 +1,27 @@ > > > > > > +/* > > > > > > + * Virtual Machine Generation ID Device > > > > > > + * > > > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > > > + * > > > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > > > + * > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > > > + * See the COPYING file in the top-level directory. 
> > > > > > + * > > > > > > + */ > > > > > > + > > > > > > +#ifndef HW_MISC_VMGENID_H > > > > > > +#define HW_MISC_VMGENID_H > > > > > > + > > > > > > +#include "qom/object.h" > > > > > > + > > > > > > +#define VMGENID_DEVICE "vmgenid" > > > > > > +#define VMGENID_GUID "guid" > > > > > > +#define VMGENID_VMGID_BUF_ADDR "vmgid-addr" > > > > > > +#define VMGENID_VMGID_BUF_SIZE 0x1000 > > > > > > +#define VMGENID_VMGID_BUF_BAR 0 > > > > > > + > > > > > > +Object *find_vmgneid_dev(Error **errp); > > > > > > + > > > > > > +#endif > > > > > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h > > > > > > index dedf277..f4c9d48 100644 > > > > > > --- a/include/hw/pci/pci.h > > > > > > +++ b/include/hw/pci/pci.h > > > > > > @@ -94,6 +94,7 @@ > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB 0x0009 > > > > > > #define PCI_DEVICE_ID_REDHAT_BRIDGE_SEAT 0x000a > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB_PCIE 0x000b > > > > > > +#define PCI_DEVICE_ID_REDHAT_VMGENID 0x000c > > > > > > #define PCI_DEVICE_ID_REDHAT_QXL 0x0100 > > > > > > > > > > > > #define FMT_PCIBUS PRIx64 > > > > > > -- > > > > > > 1.8.3.1 >
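To make the allocator idea concrete, here is a toy sketch of a
high-memory GPA bump allocator. This is an illustration only, not the
code from the patch referenced above; the pool base/limit and the
gpa_alloc() name are made up for this example:

    #include <stdint.h>
    #include <assert.h>

    #define GPA_POOL_BASE  0x100000000ULL  /* assumption: carve out above 4G */
    #define GPA_POOL_LIMIT 0x140000000ULL  /* assumption: 1G reserved window */

    static uint64_t gpa_next = GPA_POOL_BASE;

    /* hand out an aligned GPA range of 'size' bytes, or 0 when exhausted;
     * the caller would then map its MemoryRegion at the returned base */
    static uint64_t gpa_alloc(uint64_t size, uint64_t align)
    {
        uint64_t base;

        assert(align && !(align & (align - 1)));  /* power-of-two alignment */
        if (size == 0 || size > GPA_POOL_LIMIT - GPA_POOL_BASE) {
            return 0;
        }
        base = (gpa_next + align - 1) & ~(align - 1);
        if (base + size > GPA_POOL_LIMIT) {
            return 0;
        }
        gpa_next = base + size;
        return base;
    }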
On Tue, Feb 02, 2016 at 10:59:53AM +0100, Igor Mammedov wrote:
> On Sun, 31 Jan 2016 18:22:13 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Fri, Jan 29, 2016 at 12:13:59PM +0100, Igor Mammedov wrote:
> > > On Thu, 28 Jan 2016 14:59:25 +0200
> > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > On Thu, Jan 28, 2016 at 01:03:16PM +0100, Igor Mammedov wrote:
> > > > > On Thu, 28 Jan 2016 13:13:04 +0200
> > > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > > > On Thu, Jan 28, 2016 at 11:54:25AM +0100, Igor Mammedov wrote:
[... patch description re-quoted in full; trimmed ...]
> > > > > > It's an interesting hack, but this needs some thought. BIOS has no idea
> > > > > > this BAR is special and can not be rebalanced, so it might put the BAR
> > > > > > in the middle of the range, in effect fragmenting it.
> > > > > yep that's the only drawback in PCI approach.
> > > > > > Really I think something like V12 just rewritten using the new APIs
> > > > > > (probably with something like build_append_named_dword that I suggested)
> > > > > > would be much a simpler way to implement this device, given
> > > > > > the weird API limitations.
> > > > > We went over stating the drawbacks of both approaches several times
> > > > > and that's where I strongly disagree with using the v12 AML patching
> > > > > approach, for reasons stated in those discussions.
[... earlier exchange re-quoted; trimmed ...]
> Yep, the problem here is in the 'fixed' part, not in an address in general.
> Allowing mgmt to specify an address map partially works. See pc-dimm:
> on the target side libvirt specifies the address where it has been mapped
> on the source. The initial RAM mess could be fixed for future machine types
> in a similar way by replacing memory_region_allocate_system_memory()
> with pc-dimms, so that a specific machine type could reproduce
> the same layout.

But this requires management to specify the complete memory map.
Otherwise we won't be able to change the layout, ever,
even when we change machine types.

Migration is different, as addresses are queried on the source.
Hmm, I did not realize someone might misuse this and
set the address manually even without migration.
We need to find a way to prevent that before it's too late.
Eduardo - any ideas?

> But default addresses don't appear magically and have to
> come from somewhere, so we have to have an address allocator
> somewhere.

I feel that for advanced functionality like vm gen id,
we could require all addresses to be specified.
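For reference, this is the kind of explicit placement pc-dimm already
supports on the command line; the ids, sizes and address below are made
up for illustration, but addr is the property libvirt sets on the
migration target:

    # hypothetical invocation: the dimm is pinned at a caller-chosen GPA
    qemu-system-x86_64 ... \
        -m 4G,slots=2,maxmem=8G \
        -object memory-backend-ram,id=mem1,size=1G \
        -device pc-dimm,id=dimm1,memdev=mem1,addr=0x140000000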
> If we put the allocator into the guest and emulate a memory controller
> in QEMU, we would probably need to add fw_cfg interfaces
> that describe the hardware which needs mapping (probably an ever-growing
> interface like it has been with ACPI tables, before
> we axed them in the BIOS and moved them into QEMU).
> Alternatively we can put the allocator in QEMU, which could
> be simpler to implement and maintain since we won't need
> to implement extra fw_cfg interfaces and a memory controller nor
> ship/fix a QEMU/BIOS pair in sync as it has been with ACPI
> in the past.

So the linker interface solves this rather neatly:
the BIOS allocates memory, and the BIOS passes the memory map to the guest.
It has served us well for several years without need for extensions,
and it does solve the VM GEN ID problem, even though
1. it was never designed for huge areas like nvdimm seems to want to use
2. we might want to add a new 64 bit flag to avoid touching low memory

[... point 2 exchange re-quoted; trimmed ...]
> Yep, the VM GEN ID spec is broken and we can't do anything about it, as
> it's absolutely impossible to guarantee an atomic update when the guest OS
> has the address of the buffer and can read it at any time. Nothing can be
> done here.

Hmm, I thought we can stop the VM while we are changing the ID.
But of course the VM could be accessing it at the same time.
So I take this back: ACPI code reading the PCI BAR and
writing the data out to the buffer would be fine
from this point of view.

> > > And hey, nowadays we actually made fw cfg do DMA too.
> I'm not against DMA, it's the direction which seems wrong to me.
> What we need here is a way to allocate a GPA range and make sure
> that QEMU maps device memory there; a PCI BAR is one
> of the ways to do it.

OK fine, but returning a PCI BAR address to the guest is wrong.
How about reading it from ACPI then? Is it really
broken unless there's *also* a driver?

> > > Just compare with a graphics card design, where on-device memory
> > > is mapped directly at some GPA, not wasting RAM that the guest could
> > > use for other tasks.
> > This might have been true 20 years ago. Most modern cards do DMA.
> Modern cards, with their own RAM, map VRAM into the address space directly
> and allow users to use it (GEM API). So they do not waste conventional
> RAM. For example NVIDIA VRAM is mapped as PCI BARs the same way as in this
> series (even the PCI class id is the same).

Don't know enough about graphics really, I'm not sure how these are
relevant. NICs and disks certainly do DMA. And virtio gl seems to
mostly use guest RAM, not on-card RAM.

> > > VMGENID and NVDIMM use-cases look to me exactly the same, i.e.
> > > instead of consuming guest's RAM they should be mapped at
> > > some GPA and their memory accessed directly.
> > VMGENID is tied to a spec that rather arbitrarily asks for a fixed
> > address. This breaks the straight-forward approach of using a
> > rebalanceable PCI BAR.
> For PCI rebalance to work on Windows, one has to provide a working PCI
> driver, otherwise the OS will ignore the device when rebalancing happens
> and might map something else over the ignored BAR.
Does it disable the BAR then? Or just move it elsewhere?

> > > In that case NVDIMM could even map the whole label area and
> > > significantly simplify the QEMU<->OSPM protocol that currently
> > > serializes that data through a 4K page.
> > > There is also a performance issue with a buffer allocated in RAM,
> > > because DMA adds an unnecessary copying step when data could
> > > be read/written directly off the NVDIMM.
> > > It might be not very important for the _DSM interface but when it
> > > comes to supporting block mode it can become an issue.
> > So for NVDIMM, presumably it will have code to access the PCI BAR
> > properly, so it's guaranteed to work across BAR rebalancing.
> > Would that address the performance issue?
> it would if rebalancing were to account for driverless PCI device BARs,
> but it doesn't, hence such BARs need to be statically pinned
> at the place where the BIOS put them at start-up.
> I'm also not sure that a PCI_Config operation region would work
> on Windows without a loaded driver (similar to the _DSM case).
>
> > > Above points make the ACPI patching approach not robust and fragile
> > > and hard to maintain.
> > Wrt GEN ID these are all kind of subjective though. I especially don't
> > get what appears to be your general dislike of the linker host/guest
> > interface.
> Besides technical issues, general dislike is just what I've written:
> the "not robust and fragile" bios_linker_loader_add_pointer() interface.
>
> to make it less fragile:
> 1. it should be impossible to corrupt memory or patch a wrong address.
>    The current impl. silently relies on the value referenced by the
>    'pointer' argument, and to figure that out one has to read the linker
>    code on the BIOS side. That could easily be set wrong and slip
>    through review.

That's an API issue, it seemed like a good idea but I guess
it confuses people. Would you be happier using an offset
instead of a pointer?

> The API shouldn't rely on the caller setting the value pointed to by
> that argument.

I couldn't parse that one. Care suggesting a cleaner API for linker?

> 2. If it's going to be used for patching AML, it should assert
>    when bios_linker_loader_add_pointer() is called if the to-be-patched
>    AML object is wrong and patching would corrupt the AML blob.

Hmm, for example check that the patched data has
the expected pattern?

> > It's there and we are not moving away from it, so why not
> > use it in more places? Or if you think it's wrong, why don't you build
> > something better then? We could then maybe use it for these things as
> > well.
> Yep, I think for vmgenid, and even more so for nvdimm,
> it would be better to allocate GPAs in QEMU and map the backing
> MemoryRegions directly in QEMU.
> For nvdimm (the main data region)
> we already do it using pc-dimm's GPA allocation algorithm; we also
> could use a similar approach for nvdimm's label area and vmgenid.
>
> Here is a simple attempt to add a limited GPA allocator in high memory
> https://patchwork.ozlabs.org/patch/540852/
> But it hasn't got any comments from you and was ignored.
> Lets consider it and perhaps we could come up with a GPA allocator
> that could be used for other things as well.

For the nvdimm label area, I agree passing things through
a 4K buffer seems inefficient.

I'm not sure what's a better way though.

Use 64 bit memory? Setting aside old guests such as XP,
does it break 32 bit guests?

I'm really afraid of adding yet another allocator; I think you
underestimate the maintenance headache: it's not theoretical and is
already felt.
[... full patch re-quoted; trimmed ...]
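For readers without the linker code at hand: the loader commands travel
to the firmware as fixed-size records, roughly as sketched below. This is
paraphrased from hw/acpi/bios-linker-loader.c of this period; field
widths are from memory and may be off, so treat them as illustrative. The
"64 bit flag" mentioned above would presumably be a new alloc zone value
telling the firmware it may place a blob above 4G:

    #include <stdint.h>

    #define LINKER_LOADER_FILESZ 56  /* assumed: fw_cfg's max file path */

    enum {
        LINKER_LOADER_COMMAND_ALLOCATE     = 0x1,
        LINKER_LOADER_COMMAND_ADD_POINTER  = 0x2,
        LINKER_LOADER_COMMAND_ADD_CHECKSUM = 0x3,
    };

    typedef struct __attribute__((packed)) LinkerLoaderEntry {
        uint32_t command;
        union {
            struct {
                char file[LINKER_LOADER_FILESZ];
                uint32_t align;
                uint8_t zone;    /* high memory vs FSEG; an "allowed
                                  * above 4G" zone would be added here */
            } alloc;
            struct {
                char dest_file[LINKER_LOADER_FILESZ];
                char src_file[LINKER_LOADER_FILESZ];
                uint32_t offset; /* where in dest_file to patch... */
                uint8_t size;    /* ...and how many bytes (1/2/4/8) */
            } pointer;
            uint8_t pad[124];    /* commands are fixed-size records */
        };
    } LinkerLoaderEntry;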
On Tue, 2 Feb 2016 13:16:38 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Tue, Feb 02, 2016 at 10:59:53AM +0100, Igor Mammedov wrote:
> > On Sun, 31 Jan 2016 18:22:13 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > On Fri, Jan 29, 2016 at 12:13:59PM +0100, Igor Mammedov wrote:
> > > > On Thu, 28 Jan 2016 14:59:25 +0200
> > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
[... earlier exchange re-quoted in full; trimmed ...]
> > Yep, the problem here is in the 'fixed' part, not in an address in general.
> > Allowing mgmt to specify an address map partially works. See pc-dimm:
> > on the target side libvirt specifies the address where it has been mapped
> > on the source. The initial RAM mess could be fixed for future machine types
> > in a similar way by replacing memory_region_allocate_system_memory()
> > with pc-dimms, so that a specific machine type could reproduce
> > the same layout.
> But this requires management to specify the complete memory map.
for a clean start-up I'd say it's impossible for mgmt to define a complete
or even partial memory map; it just doesn't have knowledge about machine
internals. So GPAs should be allocated by QEMU (currently it's mostly
handpicked fixed ones).

> Otherwise we won't be able to change the layout, ever,
> even when we change machine types.
we won't be able to change the layout for old machines, but if we make
the layout configurable (probably with auto-allocated GPAs) then new
machines could be more flexible wrt layout (those are just vague thoughts
for now; maybe some day I'll try to implement it).

> Migration is different, as addresses are queried on the source.
>
> Hmm, I did not realize someone might misuse this and
> set the address manually even without migration.
> We need to find a way to prevent that before it's too late.
> Eduardo - any ideas?
it's too late, and it doesn't really matter, as pc-dimm.addr is confined
inside the hotplug range.

> > But default addresses don't appear magically and have to
> > come from somewhere, so we have to have an address allocator
> > somewhere.
> I feel that for advanced functionality like vm gen id,
> we could require all addresses to be specified.
Any suggestion how mgmt would pick this address?

> > If we put the allocator into the guest and emulate a memory controller
> > in QEMU, we would probably need to add fw_cfg interfaces [...]
> So the linker interface solves this rather neatly:
> the BIOS allocates memory, and the BIOS passes the memory map to the guest.
> It has served us well for several years without need for extensions,
> and it does solve the VM GEN ID problem, even though
> 1. it was never designed for huge areas like nvdimm seems to want to use
> 2. we might want to add a new 64 bit flag to avoid touching low memory
the linker interface is fine for some read-only data, like ACPI tables
(especially fixed tables), not so for AML ones if one wants to patch them.
However now that you want to use it for other purposes you start adding
extensions and other guest->QEMU channels to communicate patching info
back. It steals the guest's memory, which is also not nice and doesn't
scale well.

> > Yep, the VM GEN ID spec is broken and we can't do anything about it, as
> > it's absolutely impossible to guarantee an atomic update when the guest
> > OS has the address of the buffer and can read it at any time.
> Hmm, I thought we can stop the VM while we are changing the ID.
> But of course the VM could be accessing it at the same time.
> So I take this back: ACPI code reading the PCI BAR and
> writing the data out to the buffer would be fine
> from this point of view.
the spec is broken regardless of who reads the BAR or RAM, as the read
isn't atomic and the UUID could be updated in the middle. So MS will have
to live with that, unless they have a secret way to tell the guest to
stop reading from that ADDR, do the update, and then signal the guest
that it can read it.
> > > And hey, nowadays we actually made fw_cfg do DMA too.
> >
> > I'm not against DMA, it's the direction that seems wrong to me.
> > What we need here is a way to allocate a GPA range and make sure
> > that QEMU maps device memory there; a PCI BAR is one
> > of the ways to do it.
>
> OK fine, but returning the PCI BAR address to the guest is wrong.
> How about reading it from ACPI then? Is it really
> broken unless there's *also* a driver?
I don't get the question; the MS spec requires an address (the ADDR method),
and it's read by ACPI (AML).
As for a working PCI_Config OpRegion without a driver, I haven't tried,
but I wouldn't be surprised if it doesn't work, taking into account that
the MS-introduced _DSM doesn't.

> > > > Just compare with a graphics card design, where on-device memory
> > > > is mapped directly at some GPA, not wasting RAM that the guest could
> > > > use for other tasks.
> > >
> > > This might have been true 20 years ago. Most modern cards do DMA.
> >
> > Modern cards, with their own RAM, map their VRAM into the address space
> > directly and allow users to use it (GEM API). So they do not waste
> > conventional RAM. For example NVIDIA VRAM is mapped as PCI BARs the same
> > way as in this series (even the PCI class id is the same).
>
> Don't know enough about graphics really, I'm not sure how these are
> relevant. NICs and disks certainly do DMA. And virtio gl seems to
> mostly use guest RAM, not on-card RAM.

> > > > VMGENID and NVDIMM use-cases look to me exactly the same, i.e.
> > > > instead of consuming the guest's RAM they should be mapped at
> > > > some GPA and their memory accessed directly.
> > >
> > > VMGENID is tied to a spec that rather arbitrarily asks for a fixed
> > > address. This breaks the straight-forward approach of using a
> > > rebalanceable PCI BAR.
> >
> > For PCI rebalancing to work on Windows, one has to provide a working PCI
> > driver, otherwise the OS will ignore the device when rebalancing happens
> > and might map something else over the ignored BAR.
>
> Does it disable the BAR then? Or just move it elsewhere?
it doesn't, it just blindly ignores the BAR's existence and maps the BAR
of another device with a driver over it.

> > > > In that case NVDIMM could even map the whole label area and
> > > > significantly simplify the QEMU<->OSPM protocol that currently
> > > > serializes that data through a 4K page.
> > > > There is also a performance issue with a buffer allocated in RAM,
> > > > because DMA adds an unnecessary copying step when the data could
> > > > be read/written directly from/to the NVDIMM.
> > > > It might be not very important for the _DSM interface, but when it
> > > > comes to supporting block mode it can become an issue.
> > >
> > > So for NVDIMM, presumably it will have code to access the PCI BAR
> > > properly, so it's guaranteed to work across BAR rebalancing.
> > > Would that address the performance issue?
> >
> > it would if rebalancing were to account for driverless PCI device BARs,
> > but it doesn't, hence such BARs need to be statically pinned
> > at the place where the BIOS put them at start-up.
> > I'm also not sure that a PCI_Config operation region would work
> > on Windows without a loaded driver (similar to the _DSM case).
> >
> > > > Above points make the ACPI patching approach not robust and fragile
> > > > and hard to maintain.
> > >
> > > Wrt GEN ID these are all kind of subjective though. I especially don't
> > > get what appears to be your general dislike of the linker host/guest
> > > interface.
> > Besides the technical issues, the general dislike is just what I've
> > written: the "not robust and fragile"
> > bios_linker_loader_add_pointer() interface.
> > to make it less fragile:
> > 1. it should be impossible to corrupt memory or patch a wrong address.
> > the current impl. silently relies on the value referenced by the
> > 'pointer' argument, and to figure that out one has to read the linker
> > code on the BIOS side.
> > That could easily be set wrong and slip through review.
>
> That's an API issue, it seemed like a good idea but I guess
> it confuses people. Would you be happier using an offset
> instead of a pointer?
offset is better, and it would be better if it were saying
which offset it is (i.e. relative to what)

> > the API shouldn't rely on the caller setting the value pointed to by
> > that argument.
>
> I couldn't parse that one. Care suggesting a cleaner API for linker?
here is the current API signature:

  bios_linker_loader_add_pointer(GArray *linker,
                                 const char *dest_file,
                                 const char *src_file,
                                 GArray *table, void *pointer,
                                 uint8_t pointer_size)

issue 1:
'pointer' is a real pointer pointing inside 'table', and the API
calculates the offset under the hood:
  offset = (gchar *)pointer - table->data;
and puts it in the ADD_POINTER command.
it's easy to get a wrong offset if 'pointer' is not from 'table'.

issue 2:
'pointer' points at another offset of size 'pointer_size' inside the
'table' blob; that means that whoever composes the blob has to be aware
of it and fill in the correct value there, which is possible to do right
only if one looks inside the SeaBIOS part of the linker interface.
That is easy to forget, and then one has to deal with the mess caused by
random memory corruption.
bios_linker_loader_add_pointer() and the corresponding
ADD_POINTER command should take this second offset as an argument
and not require 'table' to be pre-filled with it; or, in the worst case,
if extending the ADD_POINTER command is problematic,
bios_linker_loader_add_pointer() should still take
the second offset and patch 'table' itself, so that the 'table' composer
doesn't have to worry about it.

issue 3:
all patching obviously needs bounds checking on the QEMU side
so it would abort early if it could corrupt memory.

> > 2. If it's going to be used for patching AML, it should assert,
> > when bios_linker_loader_add_pointer() is called, if the to-be-patched
> > AML object is wrong and patching would corrupt the AML blob.
>
> Hmm for example check that the patched data has
> the expected pattern?
yep, nothing can be done for raw tables, but that should be possible
for AML tables, and if the pattern is unsupported or the size doesn't
match it should abort QEMU early instead of corrupting the table.
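A sketch of the offset-based variant asked for above (a hypothetical signature, not an existing QEMU function) could be:

    /* Hypothetical offset-based replacement: both the location to patch and
     * the value to store are named by explicit offsets, so nothing in the
     * blob needs to be pre-filled and no raw pointers are involved. */
    void bios_linker_loader_add_pointer(GArray *linker,
                                        const char *dest_file,
                                        uint32_t dst_offset, /* where in dest_file to patch */
                                        uint8_t dst_size,    /* patched field: 1, 2, 4 or 8 bytes */
                                        const char *src_file,
                                        uint32_t src_offset);/* value: address of src_file + src_offset */

With both offsets explicit, issues 1 and 2 disappear, and the bounds check from issue 3 reduces to asserting dst_offset + dst_size <= blob length before emitting the command.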
> > > It's there and we are not moving away from it, so why not
> > > use it in more places? Or if you think it's wrong, why don't you build
> > > something better then? We could then maybe use it for these things as
> > > well.
> >
> > Yep, I think for vmgenid, and even more so for nvdimm,
> > it would be better to allocate GPAs in QEMU and map the backing
> > MemoryRegions directly in QEMU.
> > For nvdimm (the main data region)
> > we already do it using pc-dimm's GPA allocation algorithm; we also
> > could use a similar approach for nvdimm's label area and vmgenid.
> >
> > Here is a simple attempt to add a limited GPA allocator in high memory
> >   https://patchwork.ozlabs.org/patch/540852/
> > But it hasn't got any comment from you and was ignored.
> > Let's consider it, and perhaps we can come up with a GPA allocator
> > that could be used for other things as well.
>
> For the nvdimm label area, I agree passing things through
> a 4K buffer seems inefficient.
>
> I'm not sure what's a better way though.
>
> Use 64 bit memory? Setting aside old guests such as XP,
> does it break 32 bit guests?
it might not work with 32-bit guests, the same way as mem hotplug
doesn't work for them unless they are PAE-enabled,
but well, that's a limitation of the implementation, and considering
that the nvdimm storage area is mapped at a 64-bit GPA it doesn't matter.

> I'm really afraid of adding yet another allocator, I think you
> underestimate the maintenance headache: it's not theoretical and is
> already felt.
The current maintenance headache is due to the fixed handpicked
memory layout; we can't do much about it for legacy machine
types, but with a QEMU-side GPA allocator we can try to switch
to a flexible memory layout that would allocate GPAs
depending on the QEMU config in a stable manner.

well, there is a maintenance headache with bios_linker as well,
due to its complexity (multiple layers of indirection), and
it will grow as more places try to use it.
Yep, we could use it as a hack, stealing RAM and trying to implement
backwards DMA, or we could be less afraid and consider
yet another allocator which will do the job without hacks,
which should benefit QEMU in the long run (it might not be easy
to implement it right, but if we won't even try we will be buried
in complex hacks that 'work' for now)
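For reference, the smallest shape such an allocator could take (an illustrative sketch of the idea, not the code behind the patchwork link above):

    /* Minimal bump allocator handing out GPAs from a reserved high-memory
     * window; illustrative only. */
    #include <assert.h>
    #include <stdint.h>

    typedef struct GPAAllocator {
        uint64_t base;   /* start of the reserved window (above 4G) */
        uint64_t limit;  /* first address past the window */
        uint64_t next;   /* next free address */
    } GPAAllocator;

    static void gpa_allocator_init(GPAAllocator *a, uint64_t base, uint64_t size)
    {
        a->base = base;
        a->next = base;
        a->limit = base + size;
    }

    /* Returns an 'align'-aligned GPA for 'size' bytes, or UINT64_MAX when
     * the window is exhausted.  Results are stable as long as devices are
     * created in the same order with the same QEMU configuration, which is
     * what keeps the layout reproducible across migration. */
    static uint64_t gpa_alloc(GPAAllocator *a, uint64_t size, uint64_t align)
    {
        uint64_t addr;

        assert(align && !(align & (align - 1)));   /* power-of-two alignment */
        addr = (a->next + align - 1) & ~(align - 1);
        if (addr < a->next || addr > a->limit || size > a->limit - addr) {
            return UINT64_MAX;
        }
        a->next = addr + size;
        return addr;
    }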
> > > > > > > > And hey, if you want to use a pci device to pass the physical
> > > > > > > > address guest to host, instead of reserving
> > > > > > > > a couple of IO addresses, sure, stick it in pci config in
> > > > > > > > a vendor-specific capability, this way it'll get migrated
> > > > > > > > automatically.
> > > > > > > Could you elaborate more on this suggestion?
> > > > > >
> > > > > > I really just mean using a PCI_Config operation region.
> > > > > > If you wish, I'll try to post a prototype next week.
> > > > > I don't know much about PCI but it would be interesting,
> > > > > perhaps we could use it somewhere else.
> > > > >
> > > > > However it should be checked whether it works with Windows;
> > > > > for example the PCI-specific _DSM method is ignored by it
> > > > > if the PCI device doesn't have a working PCI driver bound to it.

...

> > > > > > > > +     * statically reserve resources used by VM_Gen_Counter.
> > > > > > > > +     * For more verbose comment see this commit message.
> > > > > > >
> > > > > > > What does "this commit message" mean?
> > > > > > above commit message. Should I reword it to just 'see commit message'?

...
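As a footnote to the vendor-specific capability suggestion quoted above, the buffer GPA could travel in config space roughly like this (a hypothetical sketch: the capability layout is made up, and exact helper signatures from QEMU's include/hw/pci/pci.h should be treated as assumptions):

    /* Sketch: publish the buffer GPA in a vendor-specific capability so it
     * is migrated together with the rest of PCI config space.  The layout
     * (length byte at +2, 64-bit address at +4) is invented for the demo. */
    #include "hw/pci/pci.h"

    #define VMGENID_ADDR_CAP_SIZE 12    /* 2-byte header + len + pad + u64 */

    static void vmgenid_add_addr_cap(PCIDevice *pdev, uint64_t buf_paddr)
    {
        /* offset 0 lets the PCI core pick a free slot in config space */
        int cap = pci_add_capability(pdev, PCI_CAP_ID_VNDR, 0,
                                     VMGENID_ADDR_CAP_SIZE);

        assert(cap > 0);
        pci_set_byte(pdev->config + cap + 2, VMGENID_ADDR_CAP_SIZE);
        pci_set_quad(pdev->config + cap + 4, buf_paddr);
    }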
On Tue, Feb 09, 2016 at 11:46:08AM +0100, Igor Mammedov wrote:
> > So the linker interface solves this rather neatly:
> > bios allocates memory, bios passes memory map to guest.
> > Served us well for several years without need for extensions,
> > and it does solve the VM GEN ID problem, even though
> > 1. it was never designed for huge areas like nvdimm seems to want to use
> > 2. we might want to add a new 64 bit flag to avoid touching low memory
> the linker interface is fine for some read-only data, like ACPI tables
> (especially fixed tables), not so for AML ones if one wants to patch them.
>
> However now that you want to use it for other purposes you start
> adding extensions and other guest->QEMU channels to communicate
> patching info back.
> It steals the guest's memory, which is also not nice and doesn't scale well.

This is an argument I don't get. Memory is memory: call it guest memory
or a RAM-backed PCI BAR - same thing. MMIO is cheaper of course
but much slower.

...

> > OK fine, but returning the PCI BAR address to the guest is wrong.
> > How about reading it from ACPI then? Is it really
> > broken unless there's *also* a driver?
> I don't get the question; the MS spec requires an address (the ADDR
> method), and it's read by ACPI (AML).

You were unhappy about DMA into guest memory.
As a replacement for DMA, we could have AML read from
e.g. PCI and write into RAM.
This way we don't need to pass the address to QEMU.

> As for a working PCI_Config OpRegion without a driver, I haven't tried,
> but I wouldn't be surprised if it doesn't work, taking into account that
> the MS-introduced _DSM doesn't.
>
> > > > > Just compare with a graphics card design, where on-device memory
> > > > > is mapped directly at some GPA, not wasting RAM that the guest
> > > > > could use for other tasks.
> > > >
> > > > This might have been true 20 years ago. Most modern cards do DMA.
> > >
> > > Modern cards, with their own RAM, map their VRAM into the address space
> > > directly and allow users to use it (GEM API). So they do not waste
> > > conventional RAM. For example NVIDIA VRAM is mapped as PCI BARs the
> > > same way as in this series (even the PCI class id is the same).
> >
> > Don't know enough about graphics really, I'm not sure how these are
> > relevant. NICs and disks certainly do DMA. And virtio gl seems to
> > mostly use guest RAM, not on-card RAM.
> >
> > > > > VMGENID and NVDIMM use-cases look to me exactly the same, i.e.
> > > > > instead of consuming the guest's RAM they should be mapped at
> > > > > some GPA and their memory accessed directly.
> > > >
> > > > VMGENID is tied to a spec that rather arbitrarily asks for a fixed
> > > > address. This breaks the straight-forward approach of using a
> > > > rebalanceable PCI BAR.
> > >
> > > For PCI rebalancing to work on Windows, one has to provide a working
> > > PCI driver, otherwise the OS will ignore the device when rebalancing
> > > happens and might map something else over the ignored BAR.
> >
> > Does it disable the BAR then? Or just move it elsewhere?
> it doesn't, it just blindly ignores the BAR's existence and maps the BAR
> of another device with a driver over it.

Interesting. On classical PCI this is a forbidden configuration.
Maybe we do something that confuses Windows?
Could you tell me how to reproduce this behaviour?

> > > > > In that case NVDIMM could even map the whole label area and
> > > > > significantly simplify the QEMU<->OSPM protocol that currently
> > > > > serializes that data through a 4K page.
> > > > > There is also a performance issue with a buffer allocated in RAM,
> > > > > because DMA adds an unnecessary copying step when the data could
> > > > > be read/written directly from/to the NVDIMM.
> > > > > It might be not very important for the _DSM interface, but when it
> > > > > comes to supporting block mode it can become an issue.
> > > >
> > > > So for NVDIMM, presumably it will have code to access the PCI BAR
> > > > properly, so it's guaranteed to work across BAR rebalancing.
> > > > Would that address the performance issue?
> > >
> > > it would if rebalancing were to account for driverless PCI device BARs,
> > > but it doesn't, hence such BARs need to be statically pinned
> > > at the place where the BIOS put them at start-up.
> > > I'm also not sure that a PCI_Config operation region would work
> > > on Windows without a loaded driver (similar to the _DSM case).
> > >
> > > > > Above points make the ACPI patching approach not robust and fragile
> > > > > and hard to maintain.
> > > >
> > > > Wrt GEN ID these are all kind of subjective though. I especially
> > > > don't get what appears to be your general dislike of the linker
> > > > host/guest interface.
> > > Besides the technical issues, the general dislike is just what I've
> > > written: the "not robust and fragile"
> > > bios_linker_loader_add_pointer() interface.
> > >
> > > to make it less fragile:
> > > 1. it should be impossible to corrupt memory or patch a wrong address.
> > > the current impl. silently relies on the value referenced by the
> > > 'pointer' argument, and to figure that out one has to read the linker
> > > code on the BIOS side.
> > > That could easily be set wrong and slip through review.
> >
> > That's an API issue, it seemed like a good idea but I guess
> > it confuses people. Would you be happier using an offset
> > instead of a pointer?
> offset is better, and it would be better if it were saying
> which offset it is (i.e. relative to what)

Start of table, right?

> > > the API shouldn't rely on the caller setting the value pointed to by
> > > that argument.
> >
> > I couldn't parse that one. Care suggesting a cleaner API for linker?
> here is the current API signature:
>
>   bios_linker_loader_add_pointer(GArray *linker,
>                                  const char *dest_file,
>                                  const char *src_file,
>                                  GArray *table, void *pointer,
>                                  uint8_t pointer_size)
>
> issue 1:
> 'pointer' is a real pointer pointing inside 'table', and the API
> calculates the offset under the hood:
>   offset = (gchar *)pointer - table->data;
> and puts it in the ADD_POINTER command.
> it's easy to get a wrong offset if 'pointer' is not from 'table'.

OK, replace that with table_offset?

> issue 2:
> 'pointer' points at another offset of size 'pointer_size' inside the
> 'table' blob; that means that whoever composes the blob has to be aware
> of it and fill in the correct value there, which is possible to do right
> only if one looks inside the SeaBIOS part of the linker interface.
> That is easy to forget, and then one has to deal with the mess caused by
> random memory corruption.
>
> bios_linker_loader_add_pointer() and the corresponding
> ADD_POINTER command should take this second offset as an argument
> and not require 'table' to be pre-filled with it; or, in the worst case,
> if extending the ADD_POINTER command is problematic,
> bios_linker_loader_add_pointer() should still take
> the second offset and patch 'table' itself, so that the 'table' composer
> doesn't have to worry about it.

This one I don't understand. What's the second pointer you
are talking about?

> issue 3:
> all patching obviously needs bounds checking on the QEMU side
> so it would abort early if it could corrupt memory.

That's easy.
> > > 2. If it's going to be used for patching AML, it should assert,
> > > when bios_linker_loader_add_pointer() is called, if the to-be-patched
> > > AML object is wrong and patching would corrupt the AML blob.
> >
> > Hmm for example check that the patched data has
> > the expected pattern?
> yep, nothing can be done for raw tables, but that should be possible
> for AML tables, and if the pattern is unsupported or the size doesn't
> match it should abort QEMU early instead of corrupting the table.

Above all sounds reasonable. Would you like to take a stab at it
or prefer me to?

> > > > It's there and we are not moving away from it, so why not
> > > > use it in more places? Or if you think it's wrong, why don't you
> > > > build something better then? We could then maybe use it for these
> > > > things as well.
> > >
> > > Yep, I think for vmgenid, and even more so for nvdimm,
> > > it would be better to allocate GPAs in QEMU and map the backing
> > > MemoryRegions directly in QEMU.
> > > For nvdimm (the main data region)
> > > we already do it using pc-dimm's GPA allocation algorithm; we also
> > > could use a similar approach for nvdimm's label area and vmgenid.
> > >
> > > Here is a simple attempt to add a limited GPA allocator in high memory
> > >   https://patchwork.ozlabs.org/patch/540852/
> > > But it hasn't got any comment from you and was ignored.
> > > Let's consider it, and perhaps we can come up with a GPA allocator
> > > that could be used for other things as well.
> >
> > For the nvdimm label area, I agree passing things through
> > a 4K buffer seems inefficient.
> >
> > I'm not sure what's a better way though.
> >
> > Use 64 bit memory? Setting aside old guests such as XP,
> > does it break 32 bit guests?
> it might not work with 32-bit guests, the same way as mem hotplug
> doesn't work for them unless they are PAE-enabled.

Right, I mean with PAE.

> but well, that's a limitation of the implementation, and considering
> that the nvdimm storage area is mapped at a 64-bit GPA it doesn't matter.
>
> > I'm really afraid of adding yet another allocator, I think you
> > underestimate the maintenance headache: it's not theoretical and is
> > already felt.
> The current maintenance headache is due to the fixed handpicked
> memory layout; we can't do much about it for legacy machine
> types, but with a QEMU-side GPA allocator we can try to switch
> to a flexible memory layout that would allocate GPAs
> depending on the QEMU config in a stable manner.

So far, we didn't manage to. It seems to go in the reverse direction,
where we add more and more control to let management influence the
layout. Things like alignment requirements also tend to surface later
and wreak havoc on whatever we do.
On Tue, Feb 09, 2016 at 11:46:08AM +0100, Igor Mammedov wrote:
> > > > > 2. ACPI approach consumes guest-usable RAM to allocate a buffer
> > > > > and then makes the device DMA data into that RAM.
> > > > > That's a design point I don't agree with.
> > > >
> > > > Blame the broken VM GEN ID spec.
> > > >
> > > > For VM GEN ID, instead of DMA we could make ACPI code read the PCI
> > > > BAR and copy the data over; this would fix rebalancing, but there is
> > > > a problem with this approach: it can not be done atomically (while
> > > > the VM is not yet running and accessing RAM). So you can have the
> > > > guest read a partially corrupted ID from memory.
> > >
> > > Yep, the VM GEN ID spec is broken and we can't do anything about it,
> > > as it's absolutely impossible to guarantee an atomic update when the
> > > guest OS has the address of the buffer and can read it at any time.
> > > Nothing can be done here.
> >
> > Hmm I thought we can stop the VM while we are changing the ID.
> > But of course the VM could be accessing it at the same time.
> > So I take this back, ACPI code reading the PCI BAR and
> > writing the data out to the buffer would be fine
> > from this point of view.
> the spec is broken regardless of who reads the BAR or RAM, as the read
> isn't atomic and the UUID could be updated in the middle.
> So MS will have to live with that, unless they have a secret way to tell
> the guest to stop reading from that ADDR, do the update and then signal
> the guest that it can read it.

And really, that's the issue that's driving this up to v19. There's no
good way to implement a bad spec. It feels more like a failed experiment
than a thought-through interface. So maybe we should leave this alone,
wait until we see an actual user - this way we can figure out the
implementation constraints better.
On Tue, 9 Feb 2016 14:17:44 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Feb 09, 2016 at 11:46:08AM +0100, Igor Mammedov wrote:
> > > So the linker interface solves this rather neatly:
> > > bios allocates memory, bios passes memory map to guest.
> > > Served us well for several years without need for extensions,
> > > and it does solve the VM GEN ID problem, even though
> > > 1. it was never designed for huge areas like nvdimm seems to want to use
> > > 2. we might want to add a new 64 bit flag to avoid touching low memory
> > The linker interface is fine for some read-only data, like ACPI tables -
> > especially fixed tables - but not so for AML ones, if one wants to
> > patch them.
> >
> > However, now that you want to use it for other purposes, you start
> > adding extensions and other guest->QEMU channels to communicate
> > patching info back.
> > It also steals the guest's memory, which is not nice and doesn't
> > scale well.
>
> This is an argument I don't get. Memory is memory: call it guest memory
> or a RAM-backed PCI BAR - same thing. MMIO is cheaper of course,
> but much slower.
>
> ...
It does matter for the user, though: he pays for a guest with XXX RAM
but gets less than that. And that will keep getting worse as the number
of such devices increases.

> > > OK fine, but returning the PCI BAR address to the guest is wrong.
> > > How about reading it from ACPI then? Is it really
> > > broken unless there's *also* a driver?
> > I don't get the question; the MS spec requires an address (the ADDR
> > method), and it's read by ACPI (AML).
>
> You were unhappy about DMA into guest memory.
> As a replacement for DMA, we could have AML read from
> e.g. PCI and write into RAM.
> This way we don't need to pass the address to QEMU.
That sounds better, as it saves us from allocating an IO port, and QEMU
doesn't need to write into guest memory; the only question is whether a
PCI_Config opregion would work with a driver-less PCI device.

And it's still pretty much not testable, since it would require a fully
running OSPM to execute the AML side.

> > As for a working PCI_Config OpRegion without a driver, I haven't tried,
> > but I wouldn't be surprised if it doesn't work, taking into account
> > that the MS-introduced _DSM doesn't.
> >
> > > > > > Just compare with a graphics card design, where on-device memory
> > > > > > is mapped directly at some GPA, not wasting RAM that the guest
> > > > > > could use for other tasks.
> > > > >
> > > > > This might have been true 20 years ago. Most modern cards do DMA.
> > > >
> > > > Modern cards, with their own RAM, map their VRAM into the address
> > > > space directly and allow users to use it (GEM API). So they do not
> > > > waste conventional RAM. For example, NVIDIA VRAM is mapped as PCI
> > > > BARs the same way as in this series (even the PCI class id is the
> > > > same).
> > >
> > > Don't know enough about graphics really, I'm not sure how these are
> > > relevant. NICs and disks certainly do DMA. And virtio gl seems to
> > > mostly use guest RAM, not on-card RAM.
> > >
> > > > > > VMGENID and NVDIMM use-cases look to me exactly the same, i.e.
> > > > > > instead of consuming the guest's RAM they should be mapped at
> > > > > > some GPA and their memory accessed directly.
> > > > >
> > > > > VMGENID is tied to a spec that rather arbitrarily asks for a fixed
> > > > > address. This breaks the straightforward approach of using a
> > > > > rebalanceable PCI BAR.
> > > > For PCI rebalance to work on Windows, one has to provide a working
> > > > PCI driver, otherwise the OS will ignore the device when rebalancing
> > > > happens and might map something else over the ignored BAR.
> > >
> > > Does it disable the BAR then? Or just move it elsewhere?
> > it doesn't, it just blindly ignores the BAR's existence and maps the
> > BAR of another device, one with a driver, over it.
>
> Interesting. On classical PCI this is a forbidden configuration.
> Maybe we do something that confuses windows?
> Could you tell me how to reproduce this behaviour?

#cat > t << EOF
pci_update_mappings_del
pci_update_mappings_add
EOF

#./x86_64-softmmu/qemu-system-x86_64 -snapshot -enable-kvm -snapshot \
   -monitor unix:/tmp/m,server,nowait -device pci-bridge,chassis_nr=1 \
   -boot menu=on -m 4G -trace events=t ws2012r2x64dc.img \
   -device ivshmem,id=foo,size=2M,shm,bus=pci.1,addr=01

Wait till the OS boots and note the BARs programmed for ivshmem;
in my case it was
  01:01.0 0,0xfe800000+0x100
then execute the script and watch the pci_update_mappings* trace events:

# for i in $(seq 3 18); do printf -- "device_add e1000,bus=pci.1,addr=%x\n" $i | nc -U /tmp/m; sleep 5; done;

Hotplugging e1000,bus=pci.1,addr=12 triggers rebalancing, where Windows
unmaps all BARs of the NICs on the bridge but doesn't touch ivshmem, and
then programs new BARs, where:
  pci_update_mappings_add d=0x7fa02ff0cf90 01:11.0 0,0xfe800000+0x20000
creates a BAR overlapping with ivshmem.

> > > > > > In that case NVDIMM could even map the whole label area and
> > > > > > significantly simplify the QEMU<->OSPM protocol, which currently
> > > > > > serializes that data through a 4K page.
> > > > > > There is also a performance issue with a buffer allocated in RAM,
> > > > > > because DMA adds an unnecessary copying step when data could be
> > > > > > read/written directly off the NVDIMM.
> > > > > > It might be not very important for the _DSM interface, but when
> > > > > > it comes to supporting block mode it can become an issue.
> > > > >
> > > > > So for NVDIMM, presumably it will have code to access the PCI BAR
> > > > > properly, so it's guaranteed to work across BAR rebalancing.
> > > > > Would that address the performance issue?
> > > >
> > > > it would if rebalancing were to account for driverless PCI device
> > > > BARs, but it doesn't, hence such BARs need to be statically pinned
> > > > at the place where the BIOS put them at start-up.
> > > > I'm also not sure that a PCIConfig operation region would work
> > > > on Windows without a loaded driver (similar to the _DSM case).
> > > >
> > > > > > Above points make the ACPI patching approach not robust, fragile
> > > > > > and hard to maintain.
> > > > >
> > > > > Wrt GEN ID these are all kind of subjective though. I especially
> > > > > don't get what appears to be your general dislike of the linker
> > > > > host/guest interface.
> > > > Besides technical issues, the general dislike is just what I've
> > > > written: the bios_linker_loader_add_pointer() interface is "not
> > > > robust and fragile".
> > > >
> > > > to make it less fragile:
> > > > 1. it should be impossible to corrupt memory or patch a wrong
> > > > address. the current impl. silently relies on the value referenced
> > > > by the 'pointer' argument, and to figure that out one has to read
> > > > the linker code on the BIOS side. That could easily be set wrong
> > > > and slip through review.
> > >
> > > That's an API issue, it seemed like a good idea but I guess
> > > it confuses people. Would you be happier using an offset
> > > instead of a pointer?
> > offset is better, and it would be better if it were saying
> > which offset it is (i.e. relative to what)
>
> Start of table, right?
not sure, to me it looks like the start of a blob and not the table

> > > > API shouldn't rely on the caller setting the value pointed to by
> > > > that argument.
> > >
> > > I couldn't parse that one. Care suggesting a cleaner API for the
> > > linker?
> > here is the current API signature:
> >
> >   bios_linker_loader_add_pointer(GArray *linker,
> >                                  const char *dest_file,
> >                                  const char *src_file,
> >                                  GArray *table, void *pointer,
> >                                  uint8_t pointer_size)
> >
> > issue 1:
> > 'pointer' is a real pointer pointing inside 'table', and the API
> > calculates the offset under the hood:
> >   offset = (gchar *)pointer - table->data;
> > and puts it in the ADD_POINTER command.
> > it's easy to get a wrong offset if 'pointer' is not from 'table'.
>
> OK, replace that with table_offset?
blob_offset?

also s/table/blob/

> > issue 2:
> > 'pointer' points to another offset of size 'pointer_size' inside the
> > 'table' blob; that means that whoever composes the blob has to be
> > aware of it and fill in the correct value there, which is possible to
> > do right only if one looks inside the SeaBIOS part of the linker
> > interface. That is easy to forget, and then one has to deal with the
> > mess caused by random memory corruption.
> >
> > bios_linker_loader_add_pointer() and the corresponding
> > ADD_POINTER command should take this second offset as an argument
> > and not require 'table' to be pre-filled with it; or, in the worst
> > case, if extending the ADD_POINTER command is problematic,
> > bios_linker_loader_add_pointer() should still take
> > the second offset and patch 'table' itself, so that the 'table'
> > composer doesn't have to worry about it.
>
> This one I don't understand. What's the second pointer you
> are talking about?
ha, see, even the author already has absolutely no clue how the linker
works and what the offsets are relative to.
see SeaBIOS romfile_loader_add_pointer():
    ...
    memcpy(&pointer, dest_file->data + offset, entry->pointer_size);
            ^^^^^^^ the value read here is the second offset;
    it should be a properly named field of the ADD_POINTER command,
    not part of the data blob.
    pointer = le64_to_cpu(pointer);
    pointer += (unsigned long)src_file->data;
    pointer = cpu_to_le64(pointer);
    memcpy(dest_file->data + offset, &pointer, entry->pointer_size);

all this src|dst_file and the confusing offsets (whatever they might
mean) make me experience a headache every time I need to remember how
the linker works, and I have to read both the QEMU and SeaBIOS code to
figure it out each time.
That's what I'd call an un-maintainable and hard-to-use API.

> > issue 3:
> > all patching obviously needs bounds checking on the QEMU side,
> > so it would abort early if it could corrupt memory.
>
> That's easy.
>
> > > > 2. If it's going to be used for patching AML, it should assert,
> > > > when bios_linker_loader_add_pointer() is called, if the
> > > > to-be-patched AML object is wrong and patching would corrupt the
> > > > AML blob.
> > >
> > > Hmm, for example check that the patched data has
> > > the expected pattern?
> > yep, nothing can be done for raw tables, but that should be possible
> > for AML tables, and if the pattern is unsupported or the size doesn't
> > match, it should abort QEMU early instead of corrupting the table.
>
> Above all sounds reasonable. Would you like to take a stab
> at it or prefer me to?
It would be better if it were you.

I wouldn't like to maintain it ever, as it's a too complex and
hard-to-use API, one which I'd use only as a last resort if there
weren't any other way to implement the task at hand.
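[Editor's note: to make the "offset instead of pointer" proposal
concrete, here is a hedged C sketch of what such a signature could look
like, folding in the explicit second offset (issue 2) and the bounds
checking (issue 3). The name linker_add_pointer and the exact command
layout are assumptions for illustration, not the existing QEMU API.]

    #include <glib.h>
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical offset-based variant: both offsets are explicit and
     * bounds-checked, so a caller can no longer pass a pointer taken
     * from the wrong blob, and the blob composer no longer pre-fills
     * the patched field by hand. */
    static void linker_add_pointer(GArray *linker,        /* command stream  */
                                   const char *dest_file,
                                   uint32_t dst_offset,   /* field to patch  */
                                   uint8_t dst_size,      /* 1, 2, 4 or 8    */
                                   const char *src_file,
                                   uint32_t src_offset,   /* pointed-at data */
                                   GArray *dest_blob)
    {
        uint64_t le_src = GUINT64_TO_LE((uint64_t)src_offset);

        /* issue 3: abort early instead of corrupting memory */
        assert(dst_size == 1 || dst_size == 2 ||
               dst_size == 4 || dst_size == 8);
        assert((uint64_t)dst_offset + dst_size <= dest_blob->len);

        /* issue 2: the second offset is an argument; patch the blob here
         * so the composer doesn't need to know the command's internals
         * (low-order bytes of the little-endian value) */
        memcpy(dest_blob->data + dst_offset, &le_src, dst_size);

        /* ...then emit the ADD_POINTER command naming both files and
         * dst_offset, in whatever wire format the BIOS loader expects */
        (void)linker; (void)dest_file; (void)src_file;
    }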
> > > > > It's there and we are not moving away from it, so why not
> > > > > use it in more places? Or if you think it's wrong, why don't you
> > > > > build something better then? We could then maybe use it for these
> > > > > things as well.
> > > >
> > > > Yep, I think for vmgenid, and even more so for nvdimm,
> > > > it would be better to allocate GPAs in QEMU and map the backing
> > > > MemoryRegions directly in QEMU.
> > > > For nvdimm (the main data region)
> > > > we already do it using pc-dimm's GPA allocation algorithm; we also
> > > > could use a similar approach for nvdimm's label area and vmgenid.
> > > >
> > > > Here is a simple attempt to add a limited GPA allocator in high
> > > > memory
> > > >   https://patchwork.ozlabs.org/patch/540852/
> > > > But it hasn't gotten any comment from you and was ignored.
> > > > Let's consider it, and perhaps we could come up with a GPA
> > > > allocator that could be used for other things as well.
> > >
> > > For the nvdimm label area, I agree passing things through
> > > a 4K buffer seems inefficient.
> > >
> > > I'm not sure what's a better way though.
> > >
> > > Use 64 bit memory? Setting aside old guests such as XP,
> > > does it break 32 bit guests?
> > it might not work with 32-bit guests, the same way as mem hotplug
> > doesn't work for them unless they are PAE-enabled.
>
> Right, I mean with PAE.
I've tested it with 32-bit XP and Windows 10: they boot fine, and the
vmgenid device is displayed as OK with the buffer above 4Gb (on Win10).
So at least it doesn't crash the guest.
I can't test more than that for 32-bit guests, since the utility to
read the vmgenid works only on Windows Server and there isn't a 32-bit
version of it.

> > but well, that's a limitation of the implementation, and considering
> > that the nvdimm storage area is mapped at a 64-bit GPA, it doesn't
> > matter.
> >
> > > I'm really afraid of adding yet another allocator; I think you
> > > underestimate the maintenance headache: it's not theoretical and is
> > > already felt.
> > Current maintenance headache is due to the fixed, handpicked
> > mem layout; we can't do much about it for legacy machine
> > types, but with a QEMU-side GPA allocator we can try to switch
> > to a flexible memory layout that would allocate GPAs
> > depending on the QEMU config in a stable manner.
>
> So far, we didn't manage to. It seems to go in the reverse
> direction, where we add more and more control to let management
> influence the layout. Things like alignment requirements
> also tend to surface later and wreak havoc on whatever
> we do.
Was there even an attempt to try it before? Could you point to it?
The only attempt I've seen was https://patchwork.ozlabs.org/patch/540852/
but it hasn't gotten any technical comments from you,
except for "I'm afraid that it won't work" on IRC.

QEMU already has a GPA allocator limited to the memory hotplug AS,
and it has passed through its 'growing' issues. What the above patch
proposes is to reuse the already existing memory hotplug AS and
maybe make its GPA allocator more generic (i.e. not tied
only to pc-dimm) on top of it.

It's sufficient for the vmgenid use-case and definitely much more
suitable for nvdimm, which already uses it for mapping the main
storage MemoryRegion.

> > well, there is a maintenance headache with bios_linker as well,
> > due to its complexity (multiple layers of indirection), and
> > it will grow when more places try to use it.
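[Editor's note: since the discussion keeps returning to what a
QEMU-side GPA allocator would look like, here is a minimal bump-allocator
sketch over a reserved high-memory window. It is illustrative only,
under stated assumptions (fixed window just above 4G, no freeing, 64K
alignment); it is not the code from the patchwork link above.]

    #include <stdint.h>

    #define HI_GPA_BASE  0x100000000ULL  /* assumed: window starts above 4G */
    #define HI_GPA_SIZE  0x40000000ULL   /* assumed: 1G reservation         */
    #define HI_GPA_ALIGN 0x10000ULL      /* assumed: 64K alignment          */

    static uint64_t hi_gpa_next = HI_GPA_BASE;

    /* Returns a guest physical address for 'size' bytes, or 0 on
     * exhaustion.  Stability across QEMU runs would additionally
     * require allocating in a config-derived, deterministic order,
     * which is the hard part being debated in this thread. */
    static uint64_t hi_gpa_alloc(uint64_t size)
    {
        uint64_t addr = (hi_gpa_next + HI_GPA_ALIGN - 1) &
                        ~(HI_GPA_ALIGN - 1);

        if (addr + size > HI_GPA_BASE + HI_GPA_SIZE) {
            return 0; /* window exhausted */
        }
        hi_gpa_next = addr + size;
        return addr;
    }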
> > Yep, we could use it as a hack, stealing RAM and trying to implement
> > backwards DMA, or we could be less afraid and consider
> > yet another allocator which will do the job without hacks,
> > which should benefit QEMU in the long run (it might be not easy
> > to impl. it right, but if we won't even try we will be buried
> > in complex hacks that 'work' for now)
> >
> > > > > > > > > And hey, if you want to use a pci device to pass the
> > > > > > > > > physical address guest to host, instead of reserving
> > > > > > > > > a couple of IO addresses, sure, stick it in pci config in
> > > > > > > > > a vendor-specific capability, this way it'll get migrated
> > > > > > > > > automatically.
> > > > > > > > Could you elaborate more on this suggestion?
> > > > > > >
> > > > > > > I really just mean using a PCI_Config operation region.
> > > > > > > If you wish, I'll try to post a prototype next week.
> > > > > > I don't know much about PCI, but it would be interesting;
> > > > > > perhaps we could use it somewhere else.
> > > > > >
> > > > > > However, it should be checked whether it works with Windows;
> > > > > > for example, the PCI-specific _DSM method is ignored by it
> > > > > > if the PCI device doesn't have a working PCI driver bound to it.
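[Editor's note: for reference, a hedged sketch of the vendor-specific
capability idea quoted above: the 64-bit buffer address lives in config
space, so it is migrated with the rest of the device state. The
capability layout, VMGENID_CAP_SIZE, and the helper name are assumptions
for illustration (a real vendor capability also carries a length byte);
this is not code from the series.]

    #include "hw/pci/pci.h"

    #define VMGENID_CAP_SIZE 12 /* assumed: header (2) + pad (2) + addr (8) */

    static void vmgenid_add_addr_cap(PCIDevice *dev, uint64_t buf_paddr)
    {
        /* offset 0 asks for any free spot in config space */
        int cap = pci_add_capability(dev, PCI_CAP_ID_VNDR, 0,
                                     VMGENID_CAP_SIZE);
        if (cap < 0) {
            return; /* config space exhausted; real code would report it */
        }
        /* bytes 0-1 are the capability header filled in by
         * pci_add_capability; place the address after it, low dword
         * first, so guest software can read it without a driver */
        pci_set_long(dev->config + cap + 4, buf_paddr & 0xffffffffULL);
        pci_set_long(dev->config + cap + 8, buf_paddr >> 32);
    }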
Tsirkin" <mst@redhat.com> > > > > > > > > > > - permit only one vmgenid to be created > > > > > > > > > > - enable BAR be mapped above 4Gb if it can't be mapped at low mem > > > > > > > > > > --- > > > > > > > > > > default-configs/i386-softmmu.mak | 1 + > > > > > > > > > > default-configs/x86_64-softmmu.mak | 1 + > > > > > > > > > > docs/specs/pci-ids.txt | 1 + > > > > > > > > > > hw/i386/acpi-build.c | 56 +++++++++++++- > > > > > > > > > > hw/misc/Makefile.objs | 1 + > > > > > > > > > > hw/misc/vmgenid.c | 154 +++++++++++++++++++++++++++++++++++++ > > > > > > > > > > include/hw/misc/vmgenid.h | 27 +++++++ > > > > > > > > > > include/hw/pci/pci.h | 1 + > > > > > > > > > > 8 files changed, 240 insertions(+), 2 deletions(-) > > > > > > > > > > create mode 100644 hw/misc/vmgenid.c > > > > > > > > > > create mode 100644 include/hw/misc/vmgenid.h > > > > > > > > > > > > > > > > > > > > diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak > > > > > > > > > > index b177e52..6402439 100644 > > > > > > > > > > --- a/default-configs/i386-softmmu.mak > > > > > > > > > > +++ b/default-configs/i386-softmmu.mak > > > > > > > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > > > > > > > CONFIG_IOAPIC=y > > > > > > > > > > CONFIG_PVPANIC=y > > > > > > > > > > CONFIG_MEM_HOTPLUG=y > > > > > > > > > > +CONFIG_VMGENID=y > > > > > > > > > > CONFIG_NVDIMM=y > > > > > > > > > > CONFIG_ACPI_NVDIMM=y > > > > > > > > > > CONFIG_XIO3130=y > > > > > > > > > > diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak > > > > > > > > > > index 6e3b312..fdac18f 100644 > > > > > > > > > > --- a/default-configs/x86_64-softmmu.mak > > > > > > > > > > +++ b/default-configs/x86_64-softmmu.mak > > > > > > > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > > > > > > > CONFIG_IOAPIC=y > > > > > > > > > > CONFIG_PVPANIC=y > > > > > > > > > > CONFIG_MEM_HOTPLUG=y > > > > > > > > > > +CONFIG_VMGENID=y > > > > > > > > > > CONFIG_NVDIMM=y > > > > > > > > > > CONFIG_ACPI_NVDIMM=y > > > > > > > > > > CONFIG_XIO3130=y > > > > > > > > > > diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt > > > > > > > > > > index 0adcb89..e65ecf9 100644 > > > > > > > > > > --- a/docs/specs/pci-ids.txt > > > > > > > > > > +++ b/docs/specs/pci-ids.txt > > > > > > > > > > @@ -47,6 +47,7 @@ PCI devices (other than virtio): > > > > > > > > > > 1b36:0005 PCI test device (docs/specs/pci-testdev.txt) > > > > > > > > > > 1b36:0006 PCI Rocker Ethernet switch device > > > > > > > > > > 1b36:0007 PCI SD Card Host Controller Interface (SDHCI) > > > > > > > > > > +1b36:0009 PCI VM-Generation device > > > > > > > > > > 1b36:000a PCI-PCI bridge (multiseat) > > > > > > > > > > > > > > > > > > > > All these devices are documented in docs/specs. 
> > > > > > > > > > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c > > > > > > > > > > index 78758e2..0187262 100644 > > > > > > > > > > --- a/hw/i386/acpi-build.c > > > > > > > > > > +++ b/hw/i386/acpi-build.c > > > > > > > > > > @@ -44,6 +44,7 @@ > > > > > > > > > > #include "hw/acpi/tpm.h" > > > > > > > > > > #include "sysemu/tpm_backend.h" > > > > > > > > > > #include "hw/timer/mc146818rtc_regs.h" > > > > > > > > > > +#include "hw/misc/vmgenid.h" > > > > > > > > > > > > > > > > > > > > /* Supported chipsets: */ > > > > > > > > > > #include "hw/acpi/piix4.h" > > > > > > > > > > @@ -237,6 +238,40 @@ static void acpi_get_misc_info(AcpiMiscInfo *info) > > > > > > > > > > info->applesmc_io_base = applesmc_port(); > > > > > > > > > > } > > > > > > > > > > > > > > > > > > > > +static Aml *build_vmgenid_device(uint64_t buf_paddr) > > > > > > > > > > +{ > > > > > > > > > > + Aml *dev, *pkg, *crs; > > > > > > > > > > + > > > > > > > > > > + dev = aml_device("VGEN"); > > > > > > > > > > + aml_append(dev, aml_name_decl("_HID", aml_string("QEMU0003"))); > > > > > > > > > > + aml_append(dev, aml_name_decl("_CID", aml_string("VM_Gen_Counter"))); > > > > > > > > > > + aml_append(dev, aml_name_decl("_DDN", aml_string("VM_Gen_Counter"))); > > > > > > > > > > + > > > > > > > > > > + pkg = aml_package(2); > > > > > > > > > > + /* low 32 bits of UUID buffer addr */ > > > > > > > > > > + aml_append(pkg, aml_int(buf_paddr & 0xFFFFFFFFUL)); > > > > > > > > > > + /* high 32 bits of UUID buffer addr */ > > > > > > > > > > + aml_append(pkg, aml_int(buf_paddr >> 32)); > > > > > > > > > > + aml_append(dev, aml_name_decl("ADDR", pkg)); > > > > > > > > > > + > > > > > > > > > > + /* > > > > > > > > > > + * VMGEN device has class_id PCI_CLASS_MEMORY_RAM and Windows > > > > > > > > > > + * displays it as "PCI RAM controller" which is marked as NO_DRV > > > > > > > > > > + * so Windows ignores VMGEN device completely and doesn't check > > > > > > > > > > + * for resource conflicts which during PCI rebalancing can lead > > > > > > > > > > + * to another PCI device claiming ignored BARs. To prevent this > > > > > > > > > > + * statically reserve resources used by VM_Gen_Counter. > > > > > > > > > > + * For more verbose comment see this commit message. > > > > > > > > > > > > > > > > > > What does "this commit message" mean? > > > > > > > > above commit message. Should I reword it to just 'see commit message' > > > > > > > > > > > > > > > > > > > > > > > > > > > + */ > > > > > > > > > > + crs = aml_resource_template(); > > > > > > > > > > + aml_append(crs, aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED, > > > > > > > > > > + AML_MAX_FIXED, AML_CACHEABLE, AML_READ_WRITE, 0, > > > > > > > > > > + buf_paddr, buf_paddr + VMGENID_VMGID_BUF_SIZE - 1, 0, > > > > > > > > > > + VMGENID_VMGID_BUF_SIZE)); > > > > > > > > > > + aml_append(dev, aml_name_decl("_CRS", crs)); > > > > > > > > > > + return dev; > > > > > > > > > > +} > > > > > > > > > > + > > > > > > > > > > /* > > > > > > > > > > * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE. > > > > > > > > > > * On i386 arch we only have two pci hosts, so we can look only for them. > > > > > > > > > > @@ -2171,6 +2206,7 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > > > > > > > } > > > > > > > > > > > > > > > > > > > > if (bus) { > > > > > > > > > > + Object *vmgen; > > > > > > > > > > Aml *scope = aml_scope("PCI0"); > > > > > > > > > > /* Scan all PCI buses. Generate tables to support hotplug. 
*/ > > > > > > > > > > build_append_pci_bus_devices(scope, bus, pm->pcihp_bridge_en); > > > > > > > > > > @@ -2187,6 +2223,24 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > > > > > > > aml_append(scope, dev); > > > > > > > > > > } > > > > > > > > > > > > > > > > > > > > + vmgen = find_vmgneid_dev(NULL); > > > > > > > > > > + if (vmgen) { > > > > > > > > > > + PCIDevice *pdev = PCI_DEVICE(vmgen); > > > > > > > > > > + uint64_t buf_paddr = > > > > > > > > > > + pci_get_bar_addr(pdev, VMGENID_VMGID_BUF_BAR); > > > > > > > > > > + > > > > > > > > > > + if (buf_paddr != PCI_BAR_UNMAPPED) { > > > > > > > > > > + aml_append(scope, build_vmgenid_device(buf_paddr)); > > > > > > > > > > + > > > > > > > > > > + method = aml_method("\\_GPE._E00", 0, > > > > > > > > > > + AML_NOTSERIALIZED); > > > > > > > > > > + aml_append(method, > > > > > > > > > > + aml_notify(aml_name("\\_SB.PCI0.VGEN"), > > > > > > > > > > + aml_int(0x80))); > > > > > > > > > > + aml_append(ssdt, method); > > > > > > > > > > + } > > > > > > > > > > + } > > > > > > > > > > + > > > > > > > > > > aml_append(sb_scope, scope); > > > > > > > > > > } > > > > > > > > > > } > > > > > > > > > > @@ -2489,8 +2543,6 @@ build_dsdt(GArray *table_data, GArray *linker, > > > > > > > > > > { > > > > > > > > > > aml_append(scope, aml_name_decl("_HID", aml_string("ACPI0006"))); > > > > > > > > > > > > > > > > > > > > - aml_append(scope, aml_method("_L00", 0, AML_NOTSERIALIZED)); > > > > > > > > > > - > > > > > > > > > > if (misc->is_piix4) { > > > > > > > > > > method = aml_method("_E01", 0, AML_NOTSERIALIZED); > > > > > > > > > > aml_append(method, > > > > > > > > > > diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs > > > > > > > > > > index d4765c2..1f05edd 100644 > > > > > > > > > > --- a/hw/misc/Makefile.objs > > > > > > > > > > +++ b/hw/misc/Makefile.objs > > > > > > > > > > @@ -43,4 +43,5 @@ obj-$(CONFIG_STM32F2XX_SYSCFG) += stm32f2xx_syscfg.o > > > > > > > > > > > > > > > > > > > > obj-$(CONFIG_PVPANIC) += pvpanic.o > > > > > > > > > > obj-$(CONFIG_EDU) += edu.o > > > > > > > > > > +obj-$(CONFIG_VMGENID) += vmgenid.o > > > > > > > > > > obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o > > > > > > > > > > diff --git a/hw/misc/vmgenid.c b/hw/misc/vmgenid.c > > > > > > > > > > new file mode 100644 > > > > > > > > > > index 0000000..a2fbdfc > > > > > > > > > > --- /dev/null > > > > > > > > > > +++ b/hw/misc/vmgenid.c > > > > > > > > > > @@ -0,0 +1,154 @@ > > > > > > > > > > +/* > > > > > > > > > > + * Virtual Machine Generation ID Device > > > > > > > > > > + * > > > > > > > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > > > > > > > + * > > > > > > > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > > > > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > > > > > > > + * > > > > > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > > > > > > > + * See the COPYING file in the top-level directory. 
> > > > > > > > > > + * > > > > > > > > > > + */ > > > > > > > > > > + > > > > > > > > > > +#include "hw/i386/pc.h" > > > > > > > > > > +#include "hw/pci/pci.h" > > > > > > > > > > +#include "hw/misc/vmgenid.h" > > > > > > > > > > +#include "hw/acpi/acpi.h" > > > > > > > > > > +#include "qapi/visitor.h" > > > > > > > > > > + > > > > > > > > > > +#define VMGENID(obj) OBJECT_CHECK(VmGenIdState, (obj), VMGENID_DEVICE) > > > > > > > > > > + > > > > > > > > > > +typedef struct VmGenIdState { > > > > > > > > > > + PCIDevice parent_obj; > > > > > > > > > > + MemoryRegion iomem; > > > > > > > > > > + union { > > > > > > > > > > + uint8_t guid[16]; > > > > > > > > > > + uint8_t guid_page[VMGENID_VMGID_BUF_SIZE]; > > > > > > > > > > + }; > > > > > > > > > > + bool guid_set; > > > > > > > > > > +} VmGenIdState; > > > > > > > > > > + > > > > > > > > > > +Object *find_vmgneid_dev(Error **errp) > > > > > > > > > > +{ > > > > > > > > > > + Object *obj = object_resolve_path_type("", VMGENID_DEVICE, NULL); > > > > > > > > > > + if (!obj) { > > > > > > > > > > + error_setg(errp, VMGENID_DEVICE " is not found"); > > > > > > > > > > + } > > > > > > > > > > + return obj; > > > > > > > > > > +} > > > > > > > > > > + > > > > > > > > > > +static void vmgenid_update_guest(VmGenIdState *s) > > > > > > > > > > +{ > > > > > > > > > > + Object *acpi_obj; > > > > > > > > > > + void *ptr = memory_region_get_ram_ptr(&s->iomem); > > > > > > > > > > + > > > > > > > > > > + memcpy(ptr, &s->guid, sizeof(s->guid)); > > > > > > > > > > + memory_region_set_dirty(&s->iomem, 0, sizeof(s->guid)); > > > > > > > > > > + > > > > > > > > > > + acpi_obj = object_resolve_path_type("", TYPE_ACPI_DEVICE_IF, NULL); > > > > > > > > > > + if (acpi_obj) { > > > > > > > > > > + AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_obj); > > > > > > > > > > + AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_obj); > > > > > > > > > > + ACPIREGS *acpi_regs = adevc->regs(adev); > > > > > > > > > > + > > > > > > > > > > + acpi_regs->gpe.sts[0] |= 1; /* _GPE.E00 handler */ > > > > > > > > > > + acpi_update_sci(acpi_regs, adevc->sci(adev)); > > > > > > > > > > + } > > > > > > > > > > +} > > > > > > > > > > + > > > > > > > > > > +static void vmgenid_set_guid(Object *obj, const char *value, Error **errp) > > > > > > > > > > +{ > > > > > > > > > > + VmGenIdState *s = VMGENID(obj); > > > > > > > > > > + > > > > > > > > > > + if (qemu_uuid_parse(value, s->guid) < 0) { > > > > > > > > > > + error_setg(errp, "'%s." VMGENID_GUID > > > > > > > > > > + "': Failed to parse GUID string: %s", > > > > > > > > > > + object_get_typename(OBJECT(s)), > > > > > > > > > > + value); > > > > > > > > > > + return; > > > > > > > > > > + } > > > > > > > > > > + > > > > > > > > > > + s->guid_set = true; > > > > > > > > > > + vmgenid_update_guest(s); > > > > > > > > > > +} > > > > > > > > > > + > > > > > > > > > > +static void vmgenid_get_vmgid_addr(Object *obj, Visitor *v, void *opaque, > > > > > > > > > > + const char *name, Error **errp) > > > > > > > > > > +{ > > > > > > > > > > + int64_t value = pci_get_bar_addr(PCI_DEVICE(obj), 0); > > > > > > > > > > + > > > > > > > > > > + if (value == PCI_BAR_UNMAPPED) { > > > > > > > > > > + error_setg(errp, "'%s." 
VMGENID_VMGID_BUF_ADDR "': not initialized", > > > > > > > > > > + object_get_typename(OBJECT(obj))); > > > > > > > > > > + return; > > > > > > > > > > + } > > > > > > > > > > + visit_type_int(v, &value, name, errp); > > > > > > > > > > +} > > > > > > > > > > + > > > > > > > > > > +static void vmgenid_initfn(Object *obj) > > > > > > > > > > +{ > > > > > > > > > > + VmGenIdState *s = VMGENID(obj); > > > > > > > > > > + > > > > > > > > > > + memory_region_init_ram(&s->iomem, obj, "vgid.bar", sizeof(s->guid_page), > > > > > > > > > > + &error_abort); > > > > > > > > > > + > > > > > > > > > > + object_property_add_str(obj, VMGENID_GUID, NULL, vmgenid_set_guid, NULL); > > > > > > > > > > + object_property_add(obj, VMGENID_VMGID_BUF_ADDR, "int", > > > > > > > > > > + vmgenid_get_vmgid_addr, NULL, NULL, NULL, NULL); > > > > > > > > > > +} > > > > > > > > > > + > > > > > > > > > > + > > > > > > > > > > +static void vmgenid_realize(PCIDevice *dev, Error **errp) > > > > > > > > > > +{ > > > > > > > > > > + VmGenIdState *s = VMGENID(dev); > > > > > > > > > > + bool ambiguous = false; > > > > > > > > > > + > > > > > > > > > > + object_resolve_path_type("", VMGENID_DEVICE, &ambiguous); > > > > > > > > > > + if (ambiguous) { > > > > > > > > > > + error_setg(errp, "no more than one " VMGENID_DEVICE > > > > > > > > > > + " device is permitted"); > > > > > > > > > > + return; > > > > > > > > > > + } > > > > > > > > > > + > > > > > > > > > > + if (!s->guid_set) { > > > > > > > > > > + error_setg(errp, "'%s." VMGENID_GUID "' property is not set", > > > > > > > > > > + object_get_typename(OBJECT(s))); > > > > > > > > > > + return; > > > > > > > > > > + } > > > > > > > > > > + > > > > > > > > > > + vmstate_register_ram(&s->iomem, DEVICE(s)); > > > > > > > > > > + pci_register_bar(PCI_DEVICE(s), VMGENID_VMGID_BUF_BAR, > > > > > > > > > > + PCI_BASE_ADDRESS_MEM_PREFETCH | > > > > > > > > > > + PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, > > > > > > > > > > + &s->iomem); > > > > > > > > > > + return; > > > > > > > > > > +} > > > > > > > > > > + > > > > > > > > > > +static void vmgenid_class_init(ObjectClass *klass, void *data) > > > > > > > > > > +{ > > > > > > > > > > + DeviceClass *dc = DEVICE_CLASS(klass); > > > > > > > > > > + PCIDeviceClass *k = PCI_DEVICE_CLASS(klass); > > > > > > > > > > + > > > > > > > > > > + set_bit(DEVICE_CATEGORY_MISC, dc->categories); > > > > > > > > > > + dc->hotpluggable = false; > > > > > > > > > > + k->realize = vmgenid_realize; > > > > > > > > > > + k->vendor_id = PCI_VENDOR_ID_REDHAT; > > > > > > > > > > + k->device_id = PCI_DEVICE_ID_REDHAT_VMGENID; > > > > > > > > > > + k->class_id = PCI_CLASS_MEMORY_RAM; > > > > > > > > > > +} > > > > > > > > > > + > > > > > > > > > > +static const TypeInfo vmgenid_device_info = { > > > > > > > > > > + .name = VMGENID_DEVICE, > > > > > > > > > > + .parent = TYPE_PCI_DEVICE, > > > > > > > > > > + .instance_size = sizeof(VmGenIdState), > > > > > > > > > > + .instance_init = vmgenid_initfn, > > > > > > > > > > + .class_init = vmgenid_class_init, > > > > > > > > > > +}; > > > > > > > > > > + > > > > > > > > > > +static void vmgenid_register_types(void) > > > > > > > > > > +{ > > > > > > > > > > + type_register_static(&vmgenid_device_info); > > > > > > > > > > +} > > > > > > > > > > + > > > > > > > > > > +type_init(vmgenid_register_types) > > > > > > > > > > diff --git a/include/hw/misc/vmgenid.h b/include/hw/misc/vmgenid.h > > > > > > > > > > new file mode 100644 > > > > > > > > > > index 0000000..b90882c > > > > > > > > > > 
--- /dev/null > > > > > > > > > > +++ b/include/hw/misc/vmgenid.h > > > > > > > > > > @@ -0,0 +1,27 @@ > > > > > > > > > > +/* > > > > > > > > > > + * Virtual Machine Generation ID Device > > > > > > > > > > + * > > > > > > > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > > > > > > > + * > > > > > > > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > > > > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > > > > > > > + * > > > > > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > > > > > > > + * See the COPYING file in the top-level directory. > > > > > > > > > > + * > > > > > > > > > > + */ > > > > > > > > > > + > > > > > > > > > > +#ifndef HW_MISC_VMGENID_H > > > > > > > > > > +#define HW_MISC_VMGENID_H > > > > > > > > > > + > > > > > > > > > > +#include "qom/object.h" > > > > > > > > > > + > > > > > > > > > > +#define VMGENID_DEVICE "vmgenid" > > > > > > > > > > +#define VMGENID_GUID "guid" > > > > > > > > > > +#define VMGENID_VMGID_BUF_ADDR "vmgid-addr" > > > > > > > > > > +#define VMGENID_VMGID_BUF_SIZE 0x1000 > > > > > > > > > > +#define VMGENID_VMGID_BUF_BAR 0 > > > > > > > > > > + > > > > > > > > > > +Object *find_vmgneid_dev(Error **errp); > > > > > > > > > > + > > > > > > > > > > +#endif > > > > > > > > > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h > > > > > > > > > > index dedf277..f4c9d48 100644 > > > > > > > > > > --- a/include/hw/pci/pci.h > > > > > > > > > > +++ b/include/hw/pci/pci.h > > > > > > > > > > @@ -94,6 +94,7 @@ > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB 0x0009 > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_BRIDGE_SEAT 0x000a > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB_PCIE 0x000b > > > > > > > > > > +#define PCI_DEVICE_ID_REDHAT_VMGENID 0x000c > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_QXL 0x0100 > > > > > > > > > > > > > > > > > > > > #define FMT_PCIBUS PRIx64 > > > > > > > > > > -- > > > > > > > > > > 1.8.3.1 > > > > > >
On Thu, Feb 11, 2016 at 04:16:05PM +0100, Igor Mammedov wrote:
> On Tue, 9 Feb 2016 14:17:44 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>
> > On Tue, Feb 09, 2016 at 11:46:08AM +0100, Igor Mammedov wrote:
> > > > So the linker interface solves this rather neatly:
> > > > bios allocates memory, bios passes memory map to guest.
> > > > Served us well for several years without need for extensions,
> > > > and it does solve the VM GEN ID problem, even though
> > > > 1. it was never designed for huge areas like nvdimm seems to want
> > > > to use
> > > > 2. we might want to add a new 64 bit flag to avoid touching low
> > > > memory
> > > The linker interface is fine for some read-only data, like ACPI
> > > tables - especially fixed tables - but not so for AML ones, if one
> > > wants to patch them.
> > >
> > > However, now that you want to use it for other purposes, you start
> > > adding extensions and other guest->QEMU channels to communicate
> > > patching info back.
> > > It also steals the guest's memory, which is not nice and doesn't
> > > scale well.
> >
> > This is an argument I don't get. Memory is memory: call it guest
> > memory or a RAM-backed PCI BAR - same thing. MMIO is cheaper of
> > course, but much slower.
> >
> > ...
> It does matter for the user, though: he pays for a guest with XXX RAM
> but gets less than that. And that will keep getting worse as the
> number of such devices increases.
>
> > > > OK fine, but returning the PCI BAR address to the guest is wrong.
> > > > How about reading it from ACPI then? Is it really
> > > > broken unless there's *also* a driver?
> > > I don't get the question; the MS spec requires an address (the ADDR
> > > method), and it's read by ACPI (AML).
> >
> > You were unhappy about DMA into guest memory.
> > As a replacement for DMA, we could have AML read from
> > e.g. PCI and write into RAM.
> > This way we don't need to pass the address to QEMU.
> That sounds better, as it saves us from allocating an IO port, and
> QEMU doesn't need to write into guest memory; the only question is
> whether a PCI_Config opregion would work with a driver-less PCI device.

Or a PCI BAR, for that matter. I don't know for sure.

> And it's still pretty much not testable, since it would require a
> fully running OSPM to execute the AML side.

AML is not testable, but that's nothing new.
You can test reading from PCI.

> > > As for a working PCI_Config OpRegion without a driver, I haven't
> > > tried, but I wouldn't be surprised if it doesn't work, taking into
> > > account that the MS-introduced _DSM doesn't.
> > >
> > > > > > > Just compare with a graphics card design, where on-device
> > > > > > > memory is mapped directly at some GPA, not wasting RAM that
> > > > > > > the guest could use for other tasks.
> > > > > >
> > > > > > This might have been true 20 years ago. Most modern cards do
> > > > > > DMA.
> > > > >
> > > > > Modern cards, with their own RAM, map their VRAM into the
> > > > > address space directly and allow users to use it (GEM API). So
> > > > > they do not waste conventional RAM. For example, NVIDIA VRAM is
> > > > > mapped as PCI BARs the same way as in this series (even the PCI
> > > > > class id is the same).
> > > >
> > > > Don't know enough about graphics really, I'm not sure how these
> > > > are relevant. NICs and disks certainly do DMA. And virtio gl
> > > > seems to mostly use guest RAM, not on-card RAM.
> > > >
> > > > > > > VMGENID and NVDIMM use-cases look to me exactly the same,
> > > > > > > i.e. instead of consuming the guest's RAM they should be
> > > > > > > mapped at some GPA and their memory accessed directly.
> > > > > >
> > > > > > VMGENID is tied to a spec that rather arbitrarily asks for a
> > > > > > fixed address. This breaks the straightforward approach of
> > > > > > using a rebalanceable PCI BAR.
> > > > >
> > > > > For PCI rebalance to work on Windows, one has to provide a
> > > > > working PCI driver, otherwise the OS will ignore the device when
> > > > > rebalancing happens and might map something else over the
> > > > > ignored BAR.
> > > >
> > > > Does it disable the BAR then? Or just move it elsewhere?
> > > it doesn't, it just blindly ignores the BAR's existence and maps
> > > the BAR of another device, one with a driver, over it.
> >
> > Interesting. On classical PCI this is a forbidden configuration.
> > Maybe we do something that confuses windows?
> > Could you tell me how to reproduce this behaviour?
> #cat > t << EOF
> pci_update_mappings_del
> pci_update_mappings_add
> EOF
>
> #./x86_64-softmmu/qemu-system-x86_64 -snapshot -enable-kvm -snapshot \
>    -monitor unix:/tmp/m,server,nowait -device pci-bridge,chassis_nr=1 \
>    -boot menu=on -m 4G -trace events=t ws2012r2x64dc.img \
>    -device ivshmem,id=foo,size=2M,shm,bus=pci.1,addr=01
>
> wait till the OS boots, note the BARs programmed for ivshmem;
> in my case it was
>   01:01.0 0,0xfe800000+0x100
> then execute the script and watch the pci_update_mappings* trace events
>
> # for i in $(seq 3 18); do printf -- "device_add e1000,bus=pci.1,addr=%x\n" $i | nc -U /tmp/m; sleep 5; done;
>
> hotplugging e1000,bus=pci.1,addr=12 triggers rebalancing, where
> Windows unmaps all BARs of the NICs on the bridge but doesn't touch
> ivshmem, and then programs new BARs, where:
>   pci_update_mappings_add d=0x7fa02ff0cf90 01:11.0 0,0xfe800000+0x20000
> creates a BAR overlapping with ivshmem

Thanks! We need to figure this out, because currently this does not
work properly (or maybe it works, but merely by chance).
Me and Marcel will play with this.

[re-quote of the NVDIMM and ACPI-patching points above trimmed]
> > > > > to make it less fragile:
> > > > > 1. it should be impossible to corrupt memory or patch a wrong
> > > > > address. the current impl. silently relies on the value
> > > > > referenced by the 'pointer' argument, and to figure that out one
> > > > > has to read the linker code on the BIOS side. That could easily
> > > > > be set wrong and slip through review.
> > > >
> > > > That's an API issue, it seemed like a good idea but I guess
> > > > it confuses people. Would you be happier using an offset
> > > > instead of a pointer?
> > > offset is better, and it would be better if it were saying
> > > which offset it is (i.e. relative to what)
> >
> > Start of table, right?
> not sure, to me it looks like the start of a blob and not the table

Right, that's what I meant.

> > > > > API shouldn't rely on the caller setting the value pointed to
> > > > > by that argument.
> > > >
> > > > I couldn't parse that one. Care suggesting a cleaner API for the
> > > > linker?
> > > here is the current API signature:
> > >
> > >   bios_linker_loader_add_pointer(GArray *linker,
> > >                                  const char *dest_file,
> > >                                  const char *src_file,
> > >                                  GArray *table, void *pointer,
> > >                                  uint8_t pointer_size)
> > >
> > > issue 1:
> > > 'pointer' is a real pointer pointing inside 'table', and the API
> > > calculates the offset under the hood:
> > >   offset = (gchar *)pointer - table->data;
> > > and puts it in the ADD_POINTER command.
> > > it's easy to get a wrong offset if 'pointer' is not from 'table'.
> >
> > OK, replace that with table_offset?
> blob_offset?
>
> also s/table/blob/

OK.

> > > issue 2:
> > > 'pointer' points to another offset of size 'pointer_size' inside
> > > the 'table' blob; that means that whoever composes the blob has to
> > > be aware of it and fill in the correct value there, which is
> > > possible to do right only if one looks inside the SeaBIOS part of
> > > the linker interface. That is easy to forget, and then one has to
> > > deal with the mess caused by random memory corruption.
> > >
> > > bios_linker_loader_add_pointer() and the corresponding
> > > ADD_POINTER command should take this second offset as an argument
> > > and not require 'table' to be pre-filled with it; or, in the worst
> > > case, if extending the ADD_POINTER command is problematic,
> > > bios_linker_loader_add_pointer() should still take
> > > the second offset and patch 'table' itself, so that the 'table'
> > > composer doesn't have to worry about it.
> >
> > This one I don't understand. What's the second pointer you
> > are talking about?
> ha, see, even the author already has absolutely no clue how the linker
> works and what the offsets are relative to.
> see SeaBIOS romfile_loader_add_pointer():
>     ...
>     memcpy(&pointer, dest_file->data + offset, entry->pointer_size);
>             ^^^^^^^ the value read here is the second offset;
>     it should be a properly named field of the ADD_POINTER command,
>     not part of the data blob.
>     pointer = le64_to_cpu(pointer);
>     pointer += (unsigned long)src_file->data;
>     pointer = cpu_to_le64(pointer);
>     memcpy(dest_file->data + offset, &pointer, entry->pointer_size);

It's the same offset in the entry.

    struct {
        char pointer_dest_file[ROMFILE_LOADER_FILESZ];
        char pointer_src_file[ROMFILE_LOADER_FILESZ];
        u32 pointer_offset;
        u8 pointer_size;
    };

> all this src|dst_file and the confusing offsets (whatever they might
> mean) make me experience a headache every time I need to remember how
> the linker works, and I have to read both the QEMU and SeaBIOS code
> to figure it out each time.
> That's what I'd call an un-maintainable and hard-to-use API.

Right, there's a lack of documentation. It's my fault, so let's fix it.
It's an API issue, nothing to do with the ABI.
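[Editor's note: a sketch, in SeaBIOS terms, of the direction Igor is
asking for: the offset currently smuggled through the data blob becomes
an explicit, named field of the ADD_POINTER entry. The
pointer_src_offset field and the add_pointer helper are hypothetical;
this is not the existing romfile-loader ABI, and adding the field would
of course be an ABI extension.]

    /* Hypothetical extension of the ADD_POINTER entry: the offset that
     * today has to be pre-filled inside the dest blob is made explicit. */
    struct romfile_loader_add_pointer {
        char pointer_dest_file[ROMFILE_LOADER_FILESZ];
        char pointer_src_file[ROMFILE_LOADER_FILESZ];
        u32 pointer_offset;      /* where to patch inside dest_file       */
        u32 pointer_src_offset;  /* NEW: what to point at inside src_file */
        u8 pointer_size;
    };

    /* The BIOS side then no longer reads the offset out of the blob: */
    static void add_pointer(struct romfile_loader_file *dest_file,
                            struct romfile_loader_file *src_file,
                            struct romfile_loader_add_pointer *entry)
    {
        u64 pointer = entry->pointer_src_offset; /* explicit, not from blob */
        pointer += (unsigned long)src_file->data;
        pointer = cpu_to_le64(pointer);
        memcpy(dest_file->data + entry->pointer_offset,
               &pointer, entry->pointer_size);
    }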
> > > issue 3:
> > > all patching obviously needs bounds checking on the QEMU side,
> > > so it would abort early if it could corrupt memory.
> >
> > That's easy.
> >
> > > > > 2. If it's going to be used for patching AML, it should assert,
> > > > > when bios_linker_loader_add_pointer() is called, if the
> > > > > to-be-patched AML object is wrong and patching would corrupt
> > > > > the AML blob.
> > > >
> > > > Hmm, for example check that the patched data has
> > > > the expected pattern?
> > > yep, nothing can be done for raw tables, but that should be
> > > possible for AML tables, and if the pattern is unsupported or the
> > > size doesn't match, it should abort QEMU early instead of
> > > corrupting the table.
> >
> > Above all sounds reasonable. Would you like to take a stab
> > at it or prefer me to?
> It would be better if it were you.
>
> I wouldn't like to maintain it ever, as it's a too complex and
> hard-to-use API, one which I'd use only as a last resort if there
> weren't any other way to implement the task at hand.

Sorry, I don't get it. You don't like the API? Write a better one for
the existing ABI. If you prefer waiting for me to fix it, that's fine
too, but no guarantees that you will like the new one or when it will
happen.

Look, there has been 1 change (a bugfix for alignment) in the several
years since we added the linker. We don't maintain any compatibility
flags around it *at all*. It might have a hard-to-use API, but that is
the definition of easy to maintain.

You are pushing allocating memory host-side as an alternative; what
happened there is the reverse. A ton of changes and pain all the way,
and we get to maintain a bag of compat hacks for old machine types.
You say we finally know what we are doing and won't have to change it
any more. I'm not convinced.

> > > > > >
> > > > > > It's there and we are not moving away from it, so why not
> > > > > > use it in more places? Or if you think it's wrong, why don't
> > > > > > you build something better then? We could then maybe use it
> > > > > > for these things as well.
> > > > >
> > > > > Yep, I think for vmgenid, and even more so for nvdimm,
> > > > > it would be better to allocate GPAs in QEMU and map the backing
> > > > > MemoryRegions directly in QEMU.
> > > > > For nvdimm (the main data region)
> > > > > we already do it using pc-dimm's GPA allocation algorithm; we
> > > > > also could use a similar approach for nvdimm's label area and
> > > > > vmgenid.
> > > > >
> > > > > Here is a simple attempt to add a limited GPA allocator in high
> > > > > memory
> > > > >   https://patchwork.ozlabs.org/patch/540852/
> > > > > But it hasn't gotten any comment from you and was ignored.
> > > > > Let's consider it, and perhaps we could come up with a GPA
> > > > > allocator that could be used for other things as well.
> > > >
> > > > For the nvdimm label area, I agree passing things through
> > > > a 4K buffer seems inefficient.
> > > >
> > > > I'm not sure what's a better way though.
> > > >
> > > > Use 64 bit memory? Setting aside old guests such as XP,
> > > > does it break 32 bit guests?
> > > it might not work with 32-bit guests, the same way as mem hotplug
> > > doesn't work for them unless they are PAE-enabled.
> >
> > Right, I mean with PAE.
> I've tested it with 32-bit XP and Windows 10: they boot fine, and the
> vmgenid device is displayed as OK with the buffer above 4Gb (on
> Win10). So at least it doesn't crash the guest.
> I can't test more than that for 32-bit guests, since the utility to
> read the vmgenid works only on Windows Server and there isn't a
> 32-bit version of it.
> > > but well, that's a limitation of the implementation, and
> > > considering that the nvdimm storage area is mapped at a 64-bit
> > > GPA, it doesn't matter.
> > >
> > > > I'm really afraid of adding yet another allocator; I think you
> > > > underestimate the maintenance headache: it's not theoretical and
> > > > is already felt.
> > > Current maintenance headache is due to the fixed, handpicked
> > > mem layout; we can't do much about it for legacy machine
> > > types, but with a QEMU-side GPA allocator we can try to switch
> > > to a flexible memory layout that would allocate GPAs
> > > depending on the QEMU config in a stable manner.
> >
> > So far, we didn't manage to. It seems to go in the reverse
> > direction, where we add more and more control to let management
> > influence the layout. Things like alignment requirements
> > also tend to surface later and wreak havoc on whatever
> > we do.
> Was there even an attempt to try it before? Could you point to it?

Look at the mess we have with the existing allocator. As a way to fix
this unmaintainable mess, what I see is suggestions to drop old machine
types so people have to reinstall guests. This does not inspire
confidence.

> The only attempt I've seen was
>   https://patchwork.ozlabs.org/patch/540852/
> but it hasn't gotten any technical comments from you,
> except for "I'm afraid that it won't work" on IRC.
>
> QEMU already has a GPA allocator limited to the memory hotplug AS,
> and it has passed through its 'growing' issues. What the above patch
> proposes is to reuse the already existing memory hotplug AS and
> maybe make its GPA allocator more generic (i.e. not tied
> only to pc-dimm) on top of it.

You say we finally know what we are doing and won't have to change it
any more. I'm not convinced.
[remainder of the re-quoted text and patch trimmed]
VMGENID_VMGID_BUF_ADDR "': not initialized", > > > > > > > > > > > + object_get_typename(OBJECT(obj))); > > > > > > > > > > > + return; > > > > > > > > > > > + } > > > > > > > > > > > + visit_type_int(v, &value, name, errp); > > > > > > > > > > > +} > > > > > > > > > > > + > > > > > > > > > > > +static void vmgenid_initfn(Object *obj) > > > > > > > > > > > +{ > > > > > > > > > > > + VmGenIdState *s = VMGENID(obj); > > > > > > > > > > > + > > > > > > > > > > > + memory_region_init_ram(&s->iomem, obj, "vgid.bar", sizeof(s->guid_page), > > > > > > > > > > > + &error_abort); > > > > > > > > > > > + > > > > > > > > > > > + object_property_add_str(obj, VMGENID_GUID, NULL, vmgenid_set_guid, NULL); > > > > > > > > > > > + object_property_add(obj, VMGENID_VMGID_BUF_ADDR, "int", > > > > > > > > > > > + vmgenid_get_vmgid_addr, NULL, NULL, NULL, NULL); > > > > > > > > > > > +} > > > > > > > > > > > + > > > > > > > > > > > + > > > > > > > > > > > +static void vmgenid_realize(PCIDevice *dev, Error **errp) > > > > > > > > > > > +{ > > > > > > > > > > > + VmGenIdState *s = VMGENID(dev); > > > > > > > > > > > + bool ambiguous = false; > > > > > > > > > > > + > > > > > > > > > > > + object_resolve_path_type("", VMGENID_DEVICE, &ambiguous); > > > > > > > > > > > + if (ambiguous) { > > > > > > > > > > > + error_setg(errp, "no more than one " VMGENID_DEVICE > > > > > > > > > > > + " device is permitted"); > > > > > > > > > > > + return; > > > > > > > > > > > + } > > > > > > > > > > > + > > > > > > > > > > > + if (!s->guid_set) { > > > > > > > > > > > + error_setg(errp, "'%s." VMGENID_GUID "' property is not set", > > > > > > > > > > > + object_get_typename(OBJECT(s))); > > > > > > > > > > > + return; > > > > > > > > > > > + } > > > > > > > > > > > + > > > > > > > > > > > + vmstate_register_ram(&s->iomem, DEVICE(s)); > > > > > > > > > > > + pci_register_bar(PCI_DEVICE(s), VMGENID_VMGID_BUF_BAR, > > > > > > > > > > > + PCI_BASE_ADDRESS_MEM_PREFETCH | > > > > > > > > > > > + PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, > > > > > > > > > > > + &s->iomem); > > > > > > > > > > > + return; > > > > > > > > > > > +} > > > > > > > > > > > + > > > > > > > > > > > +static void vmgenid_class_init(ObjectClass *klass, void *data) > > > > > > > > > > > +{ > > > > > > > > > > > + DeviceClass *dc = DEVICE_CLASS(klass); > > > > > > > > > > > + PCIDeviceClass *k = PCI_DEVICE_CLASS(klass); > > > > > > > > > > > + > > > > > > > > > > > + set_bit(DEVICE_CATEGORY_MISC, dc->categories); > > > > > > > > > > > + dc->hotpluggable = false; > > > > > > > > > > > + k->realize = vmgenid_realize; > > > > > > > > > > > + k->vendor_id = PCI_VENDOR_ID_REDHAT; > > > > > > > > > > > + k->device_id = PCI_DEVICE_ID_REDHAT_VMGENID; > > > > > > > > > > > + k->class_id = PCI_CLASS_MEMORY_RAM; > > > > > > > > > > > +} > > > > > > > > > > > + > > > > > > > > > > > +static const TypeInfo vmgenid_device_info = { > > > > > > > > > > > + .name = VMGENID_DEVICE, > > > > > > > > > > > + .parent = TYPE_PCI_DEVICE, > > > > > > > > > > > + .instance_size = sizeof(VmGenIdState), > > > > > > > > > > > + .instance_init = vmgenid_initfn, > > > > > > > > > > > + .class_init = vmgenid_class_init, > > > > > > > > > > > +}; > > > > > > > > > > > + > > > > > > > > > > > +static void vmgenid_register_types(void) > > > > > > > > > > > +{ > > > > > > > > > > > + type_register_static(&vmgenid_device_info); > > > > > > > > > > > +} > > > > > > > > > > > + > > > > > > > > > > > +type_init(vmgenid_register_types) > > > > > > > > > > > diff --git 
a/include/hw/misc/vmgenid.h b/include/hw/misc/vmgenid.h > > > > > > > > > > > new file mode 100644 > > > > > > > > > > > index 0000000..b90882c > > > > > > > > > > > --- /dev/null > > > > > > > > > > > +++ b/include/hw/misc/vmgenid.h > > > > > > > > > > > @@ -0,0 +1,27 @@ > > > > > > > > > > > +/* > > > > > > > > > > > + * Virtual Machine Generation ID Device > > > > > > > > > > > + * > > > > > > > > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > > > > > > > > + * > > > > > > > > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > > > > > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > > > > > > > > + * > > > > > > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > > > > > > > > + * See the COPYING file in the top-level directory. > > > > > > > > > > > + * > > > > > > > > > > > + */ > > > > > > > > > > > + > > > > > > > > > > > +#ifndef HW_MISC_VMGENID_H > > > > > > > > > > > +#define HW_MISC_VMGENID_H > > > > > > > > > > > + > > > > > > > > > > > +#include "qom/object.h" > > > > > > > > > > > + > > > > > > > > > > > +#define VMGENID_DEVICE "vmgenid" > > > > > > > > > > > +#define VMGENID_GUID "guid" > > > > > > > > > > > +#define VMGENID_VMGID_BUF_ADDR "vmgid-addr" > > > > > > > > > > > +#define VMGENID_VMGID_BUF_SIZE 0x1000 > > > > > > > > > > > +#define VMGENID_VMGID_BUF_BAR 0 > > > > > > > > > > > + > > > > > > > > > > > +Object *find_vmgneid_dev(Error **errp); > > > > > > > > > > > + > > > > > > > > > > > +#endif > > > > > > > > > > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h > > > > > > > > > > > index dedf277..f4c9d48 100644 > > > > > > > > > > > --- a/include/hw/pci/pci.h > > > > > > > > > > > +++ b/include/hw/pci/pci.h > > > > > > > > > > > @@ -94,6 +94,7 @@ > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB 0x0009 > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_BRIDGE_SEAT 0x000a > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB_PCIE 0x000b > > > > > > > > > > > +#define PCI_DEVICE_ID_REDHAT_VMGENID 0x000c > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_QXL 0x0100 > > > > > > > > > > > > > > > > > > > > > > #define FMT_PCIBUS PRIx64 > > > > > > > > > > > -- > > > > > > > > > > > 1.8.3.1 > > > > > > > >
On 02/11/2016 06:30 PM, Michael S. Tsirkin wrote:
> On Thu, Feb 11, 2016 at 04:16:05PM +0100, Igor Mammedov wrote:
>> On Tue, 9 Feb 2016 14:17:44 +0200
>> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>>
>>> On Tue, Feb 09, 2016 at 11:46:08AM +0100, Igor Mammedov wrote:
>>>>> So the linker interface solves this rather neatly:
>>>>> bios allocates memory, bios passes memory map to guest.
>>>>> Served us well for several years without need for extensions,
>>>>> and it does solve the VM GEN ID problem, even though
>>>>> 1. it was never designed for huge areas like nvdimm seems to want to use
>>>>> 2. we might want to add a new 64 bit flag to avoid touching low memory
>>>> linker interface is fine for some readonly data, like ACPI tables,
>>>> especially fixed tables, not so for AML ones if one wants to patch them.
>>>>
>>>> However now when you want to use it for other purposes you start
>>>> adding extensions and other guest->QEMU channels to communicate
>>>> patching info back.
>>>> It steals guest's memory, which is also not nice and doesn't scale well.
>>>
>>> This is an argument I don't get. memory is memory. call it guest memory
>>> or RAM backed PCI BAR - same thing. MMIO is cheaper of course
>>> but much slower.
>>>
>>> ...
>> It however matters for the user, who pays for a guest with XXX RAM but
>> gets less than that. And that will be getting worse as the number of such
>> devices increases.
>>
>>>>> OK fine, but returning PCI BAR address to guest is wrong.
>>>>> How about reading it from ACPI then? Is it really
>>>>> broken unless there's *also* a driver?
>>>> I don't get the question; the MS spec requires an address (ADDR method),
>>>> and it's read by ACPI (AML).
>>>
>>> You were unhappy about DMA into guest memory.
>>> As a replacement for DMA, we could have AML read from
>>> e.g. PCI and write into RAM.
>>> This way we don't need to pass the address to QEMU.
>> That sounds better, as it saves us from allocating an IO port
>> and QEMU doesn't need to write into guest memory; the only question is
>> whether a PCI_Config opregion would work with a driver-less PCI device.
>
> Or PCI BAR for that reason. I don't know for sure.
>
>>
>> And it's still pretty much not test-able since it would require
>> a fully running OSPM to execute the AML side.
>
> AML is not testable, but that's nothing new.
> You can test reading from PCI.
>
>>>
>>>> As for a working PCI_Config OpRegion without a driver, I haven't tried,
>>>> but I wouldn't be surprised if it doesn't work, taking into account that
>>>> the MS-introduced _DSM doesn't.
>>>>
>>>>>
>>>>>
>>>>>>>> Just compare with a graphics card design, where on-device memory
>>>>>>>> is mapped directly at some GPA, not wasting RAM that the guest could
>>>>>>>> use for other tasks.
>>>>>>>
>>>>>>> This might have been true 20 years ago. Most modern cards do DMA.
>>>>>>
>>>>>> Modern cards, with their own RAM, map their VRAM in the address space
>>>>>> directly and allow users to use it (GEM API). So they do not waste
>>>>>> conventional RAM. For example NVIDIA VRAM is mapped as PCI BARs the
>>>>>> same way as in this series (even the PCI class id is the same)
>>>>>
>>>>> Don't know enough about graphics really, I'm not sure how these are
>>>>> relevant. NICs and disks certainly do DMA. And virtio gl seems to
>>>>> mostly use guest RAM, not on-card RAM.
>>>>>
>>>>>>>> VMGENID and NVDIMM use-cases look to me exactly the same, i.e.
>>>>>>>> instead of consuming guest's RAM they should be mapped at
>>>>>>>> some GPA and their memory accessed directly.
>>>>>>>
>>>>>>> VMGENID is tied to a spec that rather arbitrarily asks for a fixed
>>>>>>> address. This breaks the straight-forward approach of using a
>>>>>>> rebalanceable PCI BAR.
>>>>>>
>>>>>> For PCI rebalance to work on Windows, one has to provide a working PCI
>>>>>> driver, otherwise the OS will ignore the device when rebalancing
>>>>>> happens and might map something else over the ignored BAR.
>>>>>
>>>>> Does it disable the BAR then? Or just move it elsewhere?
>>>> it doesn't, it just blindly ignores the BAR's existence and maps the BAR
>>>> of another device with a driver over it.
>>>
>>> Interesting. On classical PCI this is a forbidden configuration.
>>> Maybe we do something that confuses windows?
>>> Could you tell me how to reproduce this behaviour?
>> #cat > t << EOF
>> pci_update_mappings_del
>> pci_update_mappings_add
>> EOF
>>
>> #./x86_64-softmmu/qemu-system-x86_64 -snapshot -enable-kvm -snapshot \
>>    -monitor unix:/tmp/m,server,nowait -device pci-bridge,chassis_nr=1 \
>>    -boot menu=on -m 4G -trace events=t ws2012r2x64dc.img \
>>    -device ivshmem,id=foo,size=2M,shm,bus=pci.1,addr=01
>>
>> wait till the OS boots, note the BARs programmed for ivshmem
>>   in my case it was
>>     01:01.0 0,0xfe800000+0x100
>> then execute the script and watch pci_update_mappings* trace events
>>
>> # for i in $(seq 3 18); do printf -- "device_add e1000,bus=pci.1,addr=%x\n" $i | nc -U /tmp/m; sleep 5; done;
>>
>> hotplugging e1000,bus=pci.1,addr=12 triggers rebalancing, where
>> Windows unmaps all BARs of the NICs on the bridge but doesn't touch
>> ivshmem and then programs new BARs, where:
>>   pci_update_mappings_add d=0x7fa02ff0cf90 01:11.0 0,0xfe800000+0x20000
>> creates a BAR overlapping with ivshmem
>

Hi,

Let me see if I understand.
You say that in Windows, if a device does not have a driver installed,
its BAR ranges can be used after re-balancing by other devices, right?

If yes, in Windows we cannot use the device anyway, so we shouldn't care,
right? Our only problem remains the overlapping memory regions for
the old device and the new one, and we need to ensure only the new device
will use these regions?

Thanks,
Marcel

>
> Thanks!
> We need to figure this out because currently this does not
> work properly (or maybe it works, but merely by chance).
> Me and Marcel will play with this.
>
[...]
On Thu, Feb 11, 2016 at 07:34:52PM +0200, Marcel Apfelbaum wrote:
> On 02/11/2016 06:30 PM, Michael S. Tsirkin wrote:
[...]
> Let me see if I understand.
> You say that in Windows, if a device does not have a driver installed,
> its BAR ranges can be used after re-balancing by other devices, right?
>
> If yes, in Windows we cannot use the device anyway, so we shouldn't care,
> right?

If e1000 (has driver) overlaps ivshmem (no driver) we have a problem
as e1000 won't work, or will, but mostly by luck.

> Our only problem remains the overlapping memory regions for
> the old device and the new one, and we need to ensure only the new device
> will use these regions?
>
> Thanks,
> Marcel
[...]
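What makes an overlap "work by chance" on the QEMU side is worth spelling out: when two subregions cover the same guest-physical range, the memory core renders the flatview by subregion priority, so one BAR silently shadows the other. A minimal sketch against QEMU's memory API; the owner objects, region names and the address (taken from the trace above) are purely illustrative, not code from this series:

    /* Sketch: two BARs programmed over the same GPA range.
     * QEMU resolves the overlap by priority; with equal priorities
     * the visible region is effectively arbitrary, so the guest
     * "works" only until the wrong region wins. */
    #include "exec/memory.h"
    #include "qapi/error.h"

    static void map_conflicting_bars(Object *owner_a, Object *owner_b)
    {
        MemoryRegion *sysmem = get_system_memory();
        static MemoryRegion bar_a, bar_b;

        memory_region_init_ram(&bar_a, owner_a, "ivshmem-bar", 0x100,
                               &error_abort);
        memory_region_init_ram(&bar_b, owner_b, "e1000-bar", 0x20000,
                               &error_abort);

        /* Both mapped at 0xfe800000, as in the trace above. */
        memory_region_add_subregion_overlap(sysmem, 0xfe800000, &bar_a, 1);
        memory_region_add_subregion_overlap(sysmem, 0xfe800000, &bar_b, 1);
        /* Guest accesses now hit whichever region the flatview put
         * on top; nothing flags the conflict. */
    }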
On Thu, 11 Feb 2016 18:30:19 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Thu, Feb 11, 2016 at 04:16:05PM +0100, Igor Mammedov wrote:
[...]
> > > You were unhappy about DMA into guest memory.
> > > As a replacement for DMA, we could have AML read from
> > > e.g. PCI and write into RAM.
> > > This way we don't need to pass the address to QEMU.
> > That sounds better, as it saves us from allocating an IO port
> > and QEMU doesn't need to write into guest memory; the only question is
> > whether a PCI_Config opregion would work with a driver-less PCI device.
>
> Or PCI BAR for that reason. I don't know for sure.
unfortunately a BAR doesn't work for a driver-less PCI device, but maybe
we can add a vendor-specific PCI_Config area on the always-present
LPC/ISA bridges and make it do the job, like it does now for allocating
IO ports for CPU/MEM hotplug.
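If the vendor-specific capability route suggested here (and earlier by MST: "stick it in pci config in a vendor-specific capability, this way it'll get migrated automatically") were taken, the config-space side could be as small as the sketch below. The helper name and capability layout are hypothetical; pci_add_capability() and pci_set_quad() are existing QEMU helpers:

    /* Sketch: publish the GUID buffer GPA in a vendor-specific
     * capability. Config space is part of migrated PCIDevice state,
     * so the address travels with the device "for free". */
    #include "hw/pci/pci.h"
    #include "qapi/error.h"

    #define VMGENID_ADDR_CAP_SIZE 12   /* 4-byte cap header + 8-byte GPA */

    static void vmgenid_add_addr_cap(PCIDevice *dev, uint64_t buf_paddr,
                                     Error **errp)
    {
        int cap = pci_add_capability(dev, PCI_CAP_ID_VNDR, 0,
                                     VMGENID_ADDR_CAP_SIZE);
        if (cap < 0) {
            error_setg(errp, "out of config space for vendor capability");
            return;
        }
        /* GPA stored right after the capability header */
        pci_set_quad(dev->config + cap + 4, buf_paddr);
    }

Whether AML (or Windows without a bound driver) can actually read such a capability back is exactly the open question in this thread, so this only illustrates the QEMU side.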
[...]
> > > > > > > > In that case NVDIMM could even map the whole label area and
> > > > > > > > significantly simplify the QEMU<->OSPM protocol that currently
> > > > > > > > serializes that data through a 4K page.
> > > > > > > > There is also a performance issue with a buffer allocated in RAM,
> > > > > > > > because DMA adds an unnecessary copying step when data could
> > > > > > > > be read/written directly off the NVDIMM.
> > > > > > > > It might not be very important for the _DSM interface, but when
> > > > > > > > it comes to supporting block mode it can become an issue.
> > > > > > >
> > > > > > > So for NVDIMM, presumably it will have code to access the PCI BAR
> > > > > > > properly, so it's guaranteed to work across BAR rebalancing.
> > > > > > > Would that address the performance issue?
> > > > > >
> > > > > > it would if rebalancing were to account for driver-less PCI device
> > > > > > BARs, but it doesn't, hence such BARs need to be statically pinned
> > > > > > at the place where the BIOS put them at start up.
> > > > > > I'm also not sure that a PCIConfig operation region would work
> > > > > > on Windows without a loaded driver (similar to the _DSM case).
> > > > > >
> > > > > > > > Above points make the ACPI patching approach not robust, fragile
> > > > > > > > and hard to maintain.
> > > > > > >
> > > > > > > Wrt GEN ID these are all kind of subjective though. I especially
> > > > > > > don't get what appears to be your general dislike of the linker
> > > > > > > host/guest interface.
> > > > > > Besides technical issues, the general dislike is just what I've
> > > > > > written: the "not robust and fragile"
> > > > > > bios_linker_loader_add_pointer() interface.
> > > > > >
> > > > > > to make it less fragile:
> > > > > > 1. it should be impossible to corrupt memory or patch a wrong
> > > > > > address. The current impl. silently relies on the value referenced
> > > > > > by the 'pointer' argument, and to figure that out one has to read
> > > > > > the linker code on the BIOS side. That could easily be set wrong
> > > > > > and slip through review.
> > > > >
> > > > > That's an API issue, it seemed like a good idea but I guess
> > > > > it confuses people. Would you be happier using an offset
> > > > > instead of a pointer?
> > > > offset is better, and it would be better if it said
> > > > which offset it is (i.e. relative to what)
> > >
> > > Start of table, right?
> > not sure, to me it looks like start of a blob and not the table
> Right that's what I meant.
>
> > > > > > The API shouldn't rely on the caller setting the value pointed to
> > > > > > by that argument.
> > > > >
> > > > > I couldn't parse that one. Care suggesting a cleaner API for the
> > > > > linker?
> > > > here is the current API signature:
> > > >
> > > >   bios_linker_loader_add_pointer(GArray *linker,
> > > >                                  const char *dest_file,
> > > >                                  const char *src_file,
> > > >                                  GArray *table, void *pointer,
> > > >                                  uint8_t pointer_size)
> > > >
> > > > issue 1:
> > > > 'pointer' is a real pointer pointing inside 'table', and the API
> > > > calculates the offset under the hood:
> > > >   offset = (gchar *)pointer - table->data;
> > > > and puts it in the ADD_POINTER command.
> > > >
> > > > it's easy to get a wrong offset if 'pointer' is not from 'table'.
> > >
> > > OK, replace that with table_offset?
> > blob_offset?
> >
> > also s/table/blob/
> OK.
>
> > > > issue 2:
> > > > 'pointer' points to another offset of size 'pointer_size' in the
> > > > 'table' blob; that means that whoever composes the blob has to be
> > > > aware of it and fill in a correct value there, which is possible to
> > > > do right only if one looks inside the SeaBIOS part of the linker
> > > > interface. That is easy to forget, and then one has to deal with the
> > > > mess caused by random memory corruption.
> > > >
> > > > bios_linker_loader_add_pointer() and the corresponding
> > > > ADD_POINTER command should take this second offset as an argument
> > > > and not require 'table' to be pre-filled with it, or,
> > > > in the worst case, if extending the ADD_POINTER command is
> > > > problematic, bios_linker_loader_add_pointer() should still take
> > > > the second offset and patch 'table' itself, so that the 'table'
> > > > composer doesn't have to worry about it.
> > >
> > > This one I don't understand. What's the second pointer you
> > > are talking about?
> > ha, see, even the author already has absolutely no clue how the linker
> > works and what the offsets are relative to.
> > see SeaBIOS romfile_loader_add_pointer():
> > ...
> >     memcpy(&pointer, dest_file->data + offset, entry->pointer_size);
> > here is the second offset ^^^^^^^^^^^^^
> It's the same offset in the entry.
> struct {
>     char pointer_dest_file[ROMFILE_LOADER_FILESZ];
>     char pointer_src_file[ROMFILE_LOADER_FILESZ];
>     u32 pointer_offset;
>     u8 pointer_size;
> };
pointer_offset == offset from above, but the question is what the result
of the memcpy() is and how it's used below vvvv
> >     pointer = le64_to_cpu(pointer);
> >     pointer += (unsigned long)src_file->data;
as you see, *(foo_size *)(dest_file->data + offset) is a 2nd offset,
relative to the beginning of src_file, and the current API requires the
dst_file blob to contain a valid value of pointer_size there.
i.e. the author of the AML has to prefill the 2nd offset before passing
the blob to bios_linker_loader_add_pointer(), which is rather fragile.
If it's difficult to make the ADD_POINTER command pass that offset as
part of the command, then it would be better to extend
bios_linker_loader_add_pointer() to take a src_offset argument and write
it into the blob, instead of asking the AML author to do it manually
every time. And maybe add a bios_linker_loader_add_aml_pointer() which
would be able to check that it patches AML correctly, while the old
bios_linker_loader_add_pointer() wouldn't do the check and would work
with raw tables.
> >     pointer = cpu_to_le64(pointer);
> >     memcpy(dest_file->data + offset, &pointer, entry->pointer_size);
> >
> > all this src|dst_file and confusing offsets (whatever they might mean)
> > make me experience a headache every time I need to remember how the
> > linker works and read both the QEMU and SeaBIOS code to figure it out
> > each time. That's what I'd call an un-maintainable and hard to use API.
>
> Right, there's a lack of documentation. It's my fault, so let's fix it.
> It's an API issue, nothing to do with the ABI.
>
> > > > issue 3:
> > > > all patching obviously needs bounds checking on the QEMU side
> > > > so it would abort early if it could corrupt memory.
> > >
> > > That's easy.
> > >
> > > > > > 2. If it's going to be used for patching AML, it should assert
> > > > > > when bios_linker_loader_add_pointer() is called if the
> > > > > > to-be-patched AML object is wrong and patching would corrupt the
> > > > > > AML blob.
> > > > >
> > > > > Hmm, for example check that the patched data has
> > > > > the expected pattern?
> > > > yep, nothing can be done for raw tables, but that should be possible
> > > > for AML tables, and if the pattern is unsupported or the size doesn't
> > > > match, it should abort QEMU early instead of corrupting the table.
> > >
> > > Above all sounds reasonable. Would you like to take a stab
> > > at it or prefer me to?
> > It would be better if it were you.
> >
> > I wouldn't like to maintain it ever, as it's a too complex and hard to
> > use API, which I'd use only as a last resort if there weren't any other
> > way to implement the task at hand.
>
> Sorry, I don't get it. You don't like the API, write a better one
> for the existing ABI. If you prefer waiting for me to fix it,
> that's fine too, but no guarantees that you will like the new one
> or when it will happen.
>
> Look, there has been one change (a bugfix for alignment) in the several
> years since we added the linker. We don't maintain any compatibility
> flags around it *at all*. It might have a hard to use API, but that is
> the definition of easy to maintain. You are pushing allocating memory
> host side as an alternative; what happened there is the reverse. A ton
> of changes and pain all the way, and we get to maintain a bag of compat
> hacks for old machine types.
> You say we finally know what we are doing and won't have to change it
> any more. I'm not convinced.
You are talking (lots of compat issues) about the manually mapped initial
memory map, while I'm talking about allocation in the hotplug memory
region. It also had only one compat 'improvement' (for alignment), but
otherwise it wasn't a source of any problems.

> > > > > > > It's there and we are not moving away from it, so why not
> > > > > > > use it in more places? Or if you think it's wrong, why don't
> > > > > > > you build something better then? We could then maybe use it
> > > > > > > for these things as well.
> > > > > >
> > > > > > Yep, I think for vmgenid and even more so for nvdimm
> > > > > > it would be better to allocate GPAs in QEMU and map backing
> > > > > > MemoryRegions directly in QEMU.
> > > > > > For nvdimm (the main data region)
> > > > > > we already do it using pc-dimm's GPA allocation algorithm; we
> > > > > > could also use a similar approach for nvdimm's label area and
> > > > > > vmgenid.
> > > > > >
> > > > > > Here is a simple attempt to add a limited GPA allocator in high
> > > > > > memory
> > > > > >   https://patchwork.ozlabs.org/patch/540852/
> > > > > > But it hasn't got any comment from you and was ignored.
> > > > > > Lets consider it, and perhaps we can come up with a GPA allocator
> > > > > > that could be used for other things as well.
> > > > >
> > > > > For the nvdimm label area, I agree passing things through
> > > > > a 4K buffer seems inefficient.
> > > > >
> > > > > I'm not sure what's a better way though.
> > > > >
> > > > > Use 64 bit memory? Setting aside old guests such as XP,
> > > > > does it break 32 bit guests?
> > > > it might not work with 32bit guests, the same way as mem hotplug
> > > > doesn't work for them unless they are PAE enabled.
> > >
> > > Right, I mean with PAE.
> > I've tested it with 32-bit XP and Windows 10; they boot fine and
> > the vmgenid device is displayed as OK with the buffer above 4Gb (on
> > Win10). So at least it doesn't crash the guest.
> > I can't test more than that for 32 bit guests, since the utility
> > to read vmgenid works only on Windows Server and there isn't
> > a 32bit version of it.
> >
> > > > but well, that's a limitation of the implementation, and considering
> > > > that the nvdimm storage area is mapped at a 64bit GPA, it doesn't
> > > > matter.
> > > >
> > > > > I'm really afraid of adding yet another allocator, I think you
> > > > > underestimate the maintenance headache: it's not theoretical and
> > > > > is already felt.
> > > > The current maintenance headache is due to the fixed handpicked
> > > > mem layout; we can't do much with it for legacy machine
> > > > types, but with a QEMU-side GPA allocator we can try to switch
> > > > to a flexible memory layout that would allocate GPAs
> > > > depending on the QEMU config in a stable manner.
> > >
> > > So far, we didn't manage to. It seems to go in the reverse
> > > direction where we add more and more control to let management
> > > influence the layout. Things like alignment requirements
> > > also tend to surface later and wreak havoc on whatever
> > > we do.
> > Was there even an attempt to try it before? could you point to it?
>
> Look at the mess we have with the existing allocator.
> As a way to fix this unmaintainable mess, what I see is suggestions
> to drop old machine types so people have to reinstall guests.
> This does not inspire confidence.
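For scale, the allocator being argued about here need not be elaborate. The following is a minimal sketch of the idea behind the patch referenced above (a bump allocator over the existing hotplug memory region); the names and the never-reuse policy are illustrative, not the posted code:

    #include <stdint.h>
    #include <assert.h>

    /* Sketch: hand out aligned GPA ranges from a fixed region and never
     * reuse them, so assignments stay stable for a given QEMU config. */
    typedef struct GPAAllocator {
        uint64_t base;   /* start of the hotplug-memory address space */
        uint64_t limit;  /* end of the region */
        uint64_t next;   /* next free address, starts at base */
    } GPAAllocator;

    static uint64_t gpa_alloc(GPAAllocator *a, uint64_t size, uint64_t align)
    {
        uint64_t start;

        assert(align && !(align & (align - 1)));   /* power-of-two align */
        start = (a->next + align - 1) & ~(align - 1);
        if (start + size > a->limit) {
            return UINT64_MAX;                     /* out of space */
        }
        a->next = start + size;
        return start;
    }

A vmgenid buffer would then take one 4K page from that region instead of depending on wherever the BIOS happens to place a BAR, which is the stability property Igor is after.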
> > The only attempt I've seen was
> >   https://patchwork.ozlabs.org/patch/540852/
> > but it hasn't got any technical comments from you,
> > except for "I'm afraid that it won't work" on IRC.
> >
> > QEMU already has a GPA allocator limited to the memory hotplug AS,
> > and it has passed through its 'growing' issues. What the above patch
> > proposes is to reuse the already existing memory hotplug AS and
> > maybe make its GPA allocator more generic (i.e. not tied
> > only to pc-dimm) on top of it.
>
> You say we finally know what we are doing and won't have to change it
> any more. I'm not convinced.
>
> > It's sufficient for the vmgenid use-case and definitely
> > much more suitable for nvdimm, which already uses it for mapping the
> > main storage MemoryRegion.
> >
> > > > well, there is a maintenance headache with bios_linker as well,
> > > > due to its complexity (multiple layers of indirection), and
> > > > it will grow when more places try to use it.
> > > > Yep, we could use it as a hack, stealing RAM and trying to implement
> > > > backwards DMA, or we could be less afraid and consider
> > > > yet another allocator which will do the job without hacks,
> > > > which should benefit QEMU in the long run (it might be not easy
> > > > to impl. it right, but if we won't even try we will be buried
> > > > in complex hacks that 'work' for now)
> > > >
> > > > > > > > > > > And hey, if you want to use a pci device to pass the
> > > > > > > > > > > physical address guest to host, instead of reserving
> > > > > > > > > > > a couple of IO addresses, sure, stick it in pci config
> > > > > > > > > > > in a vendor-specific capability, this way it'll get
> > > > > > > > > > > migrated automatically.
> > > > > > > > > > Could you elaborate more on this suggestion?
> > > > > > > > >
> > > > > > > > > I really just mean using a PCI_Config operation region.
> > > > > > > > > If you wish, I'll try to post a prototype next week.
> > > > > > > > I don't know much about PCI but it would be interesting,
> > > > > > > > perhaps we could use it somewhere else.
> > > > > > > >
> > > > > > > > However it should be checked whether it works with Windows;
> > > > > > > > for example the PCI-specific _DSM method is ignored by it
> > > > > > > > if the PCI device doesn't have a working PCI driver bound to
> > > > > > > > it.
[...]
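Coming back to the linker API extension Igor argues for earlier in this message: an explicit src_offset, so the blob author never pre-fills the destination bytes by hand, could look roughly like the sketch below. The signature and the emit helper are hypothetical; the asserts are the "abort early" bounds checking from issue 3, and little-endian handling is elided for brevity:

    #include <glib.h>
    #include <string.h>
    #include <assert.h>

    /* Sketch of an extended ADD_POINTER helper: both offsets are
     * explicit and validated, and QEMU writes the src-relative offset
     * into the blob itself instead of trusting the caller to have
     * pre-filled it. */
    static void linker_add_pointer_checked(GArray *linker,
                                           const char *dest_file,
                                           const char *src_file,
                                           GArray *dst_blob,
                                           uint32_t dst_offset,
                                           uint8_t pointer_size,
                                           uint32_t src_offset)
    {
        uint64_t val = src_offset;

        assert(pointer_size == 1 || pointer_size == 2 ||
               pointer_size == 4 || pointer_size == 8);
        /* issue 3: refuse to patch outside the blob */
        assert(dst_offset + pointer_size <= dst_blob->len);

        /* issue 2: QEMU, not the AML author, fills the second offset */
        memcpy(dst_blob->data + dst_offset, &val, pointer_size);

        /* then emit the ADD_POINTER command as today; at boot the BIOS
         * reads these bytes, adds src_file's load address, and writes
         * the result back (cf. the quoted romfile_loader code). */
        /* emit_add_pointer(linker, dest_file, src_file,
                            dst_offset, pointer_size); */
    }

An AML-aware variant, as proposed, could additionally verify that dst_offset lands on the operand of the expected AML opcode before patching.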
Tsirkin" <mst@redhat.com> > > > > > > > > > > > > - permit only one vmgenid to be created > > > > > > > > > > > > - enable BAR be mapped above 4Gb if it can't be mapped at low mem > > > > > > > > > > > > --- > > > > > > > > > > > > default-configs/i386-softmmu.mak | 1 + > > > > > > > > > > > > default-configs/x86_64-softmmu.mak | 1 + > > > > > > > > > > > > docs/specs/pci-ids.txt | 1 + > > > > > > > > > > > > hw/i386/acpi-build.c | 56 +++++++++++++- > > > > > > > > > > > > hw/misc/Makefile.objs | 1 + > > > > > > > > > > > > hw/misc/vmgenid.c | 154 +++++++++++++++++++++++++++++++++++++ > > > > > > > > > > > > include/hw/misc/vmgenid.h | 27 +++++++ > > > > > > > > > > > > include/hw/pci/pci.h | 1 + > > > > > > > > > > > > 8 files changed, 240 insertions(+), 2 deletions(-) > > > > > > > > > > > > create mode 100644 hw/misc/vmgenid.c > > > > > > > > > > > > create mode 100644 include/hw/misc/vmgenid.h > > > > > > > > > > > > > > > > > > > > > > > > diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak > > > > > > > > > > > > index b177e52..6402439 100644 > > > > > > > > > > > > --- a/default-configs/i386-softmmu.mak > > > > > > > > > > > > +++ b/default-configs/i386-softmmu.mak > > > > > > > > > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > > > > > > > > > CONFIG_IOAPIC=y > > > > > > > > > > > > CONFIG_PVPANIC=y > > > > > > > > > > > > CONFIG_MEM_HOTPLUG=y > > > > > > > > > > > > +CONFIG_VMGENID=y > > > > > > > > > > > > CONFIG_NVDIMM=y > > > > > > > > > > > > CONFIG_ACPI_NVDIMM=y > > > > > > > > > > > > CONFIG_XIO3130=y > > > > > > > > > > > > diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak > > > > > > > > > > > > index 6e3b312..fdac18f 100644 > > > > > > > > > > > > --- a/default-configs/x86_64-softmmu.mak > > > > > > > > > > > > +++ b/default-configs/x86_64-softmmu.mak > > > > > > > > > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > > > > > > > > > CONFIG_IOAPIC=y > > > > > > > > > > > > CONFIG_PVPANIC=y > > > > > > > > > > > > CONFIG_MEM_HOTPLUG=y > > > > > > > > > > > > +CONFIG_VMGENID=y > > > > > > > > > > > > CONFIG_NVDIMM=y > > > > > > > > > > > > CONFIG_ACPI_NVDIMM=y > > > > > > > > > > > > CONFIG_XIO3130=y > > > > > > > > > > > > diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt > > > > > > > > > > > > index 0adcb89..e65ecf9 100644 > > > > > > > > > > > > --- a/docs/specs/pci-ids.txt > > > > > > > > > > > > +++ b/docs/specs/pci-ids.txt > > > > > > > > > > > > @@ -47,6 +47,7 @@ PCI devices (other than virtio): > > > > > > > > > > > > 1b36:0005 PCI test device (docs/specs/pci-testdev.txt) > > > > > > > > > > > > 1b36:0006 PCI Rocker Ethernet switch device > > > > > > > > > > > > 1b36:0007 PCI SD Card Host Controller Interface (SDHCI) > > > > > > > > > > > > +1b36:0009 PCI VM-Generation device > > > > > > > > > > > > 1b36:000a PCI-PCI bridge (multiseat) > > > > > > > > > > > > > > > > > > > > > > > > All these devices are documented in docs/specs. 
> > > > > > > > > > > > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c > > > > > > > > > > > > index 78758e2..0187262 100644 > > > > > > > > > > > > --- a/hw/i386/acpi-build.c > > > > > > > > > > > > +++ b/hw/i386/acpi-build.c > > > > > > > > > > > > @@ -44,6 +44,7 @@ > > > > > > > > > > > > #include "hw/acpi/tpm.h" > > > > > > > > > > > > #include "sysemu/tpm_backend.h" > > > > > > > > > > > > #include "hw/timer/mc146818rtc_regs.h" > > > > > > > > > > > > +#include "hw/misc/vmgenid.h" > > > > > > > > > > > > > > > > > > > > > > > > /* Supported chipsets: */ > > > > > > > > > > > > #include "hw/acpi/piix4.h" > > > > > > > > > > > > @@ -237,6 +238,40 @@ static void acpi_get_misc_info(AcpiMiscInfo *info) > > > > > > > > > > > > info->applesmc_io_base = applesmc_port(); > > > > > > > > > > > > } > > > > > > > > > > > > > > > > > > > > > > > > +static Aml *build_vmgenid_device(uint64_t buf_paddr) > > > > > > > > > > > > +{ > > > > > > > > > > > > + Aml *dev, *pkg, *crs; > > > > > > > > > > > > + > > > > > > > > > > > > + dev = aml_device("VGEN"); > > > > > > > > > > > > + aml_append(dev, aml_name_decl("_HID", aml_string("QEMU0003"))); > > > > > > > > > > > > + aml_append(dev, aml_name_decl("_CID", aml_string("VM_Gen_Counter"))); > > > > > > > > > > > > + aml_append(dev, aml_name_decl("_DDN", aml_string("VM_Gen_Counter"))); > > > > > > > > > > > > + > > > > > > > > > > > > + pkg = aml_package(2); > > > > > > > > > > > > + /* low 32 bits of UUID buffer addr */ > > > > > > > > > > > > + aml_append(pkg, aml_int(buf_paddr & 0xFFFFFFFFUL)); > > > > > > > > > > > > + /* high 32 bits of UUID buffer addr */ > > > > > > > > > > > > + aml_append(pkg, aml_int(buf_paddr >> 32)); > > > > > > > > > > > > + aml_append(dev, aml_name_decl("ADDR", pkg)); > > > > > > > > > > > > + > > > > > > > > > > > > + /* > > > > > > > > > > > > + * VMGEN device has class_id PCI_CLASS_MEMORY_RAM and Windows > > > > > > > > > > > > + * displays it as "PCI RAM controller" which is marked as NO_DRV > > > > > > > > > > > > + * so Windows ignores VMGEN device completely and doesn't check > > > > > > > > > > > > + * for resource conflicts which during PCI rebalancing can lead > > > > > > > > > > > > + * to another PCI device claiming ignored BARs. To prevent this > > > > > > > > > > > > + * statically reserve resources used by VM_Gen_Counter. > > > > > > > > > > > > + * For more verbose comment see this commit message. > > > > > > > > > > > > > > > > > > > > > > What does "this commit message" mean? > > > > > > > > > > above commit message. Should I reword it to just 'see commit message' > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > + */ > > > > > > > > > > > > + crs = aml_resource_template(); > > > > > > > > > > > > + aml_append(crs, aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED, > > > > > > > > > > > > + AML_MAX_FIXED, AML_CACHEABLE, AML_READ_WRITE, 0, > > > > > > > > > > > > + buf_paddr, buf_paddr + VMGENID_VMGID_BUF_SIZE - 1, 0, > > > > > > > > > > > > + VMGENID_VMGID_BUF_SIZE)); > > > > > > > > > > > > + aml_append(dev, aml_name_decl("_CRS", crs)); > > > > > > > > > > > > + return dev; > > > > > > > > > > > > +} > > > > > > > > > > > > + > > > > > > > > > > > > /* > > > > > > > > > > > > * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE. > > > > > > > > > > > > * On i386 arch we only have two pci hosts, so we can look only for them. 
> > > > > > > > > > > > @@ -2171,6 +2206,7 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > > > > > > > > > } > > > > > > > > > > > > > > > > > > > > > > > > if (bus) { > > > > > > > > > > > > + Object *vmgen; > > > > > > > > > > > > Aml *scope = aml_scope("PCI0"); > > > > > > > > > > > > /* Scan all PCI buses. Generate tables to support hotplug. */ > > > > > > > > > > > > build_append_pci_bus_devices(scope, bus, pm->pcihp_bridge_en); > > > > > > > > > > > > @@ -2187,6 +2223,24 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > > > > > > > > > aml_append(scope, dev); > > > > > > > > > > > > } > > > > > > > > > > > > > > > > > > > > > > > > + vmgen = find_vmgneid_dev(NULL); > > > > > > > > > > > > + if (vmgen) { > > > > > > > > > > > > + PCIDevice *pdev = PCI_DEVICE(vmgen); > > > > > > > > > > > > + uint64_t buf_paddr = > > > > > > > > > > > > + pci_get_bar_addr(pdev, VMGENID_VMGID_BUF_BAR); > > > > > > > > > > > > + > > > > > > > > > > > > + if (buf_paddr != PCI_BAR_UNMAPPED) { > > > > > > > > > > > > + aml_append(scope, build_vmgenid_device(buf_paddr)); > > > > > > > > > > > > + > > > > > > > > > > > > + method = aml_method("\\_GPE._E00", 0, > > > > > > > > > > > > + AML_NOTSERIALIZED); > > > > > > > > > > > > + aml_append(method, > > > > > > > > > > > > + aml_notify(aml_name("\\_SB.PCI0.VGEN"), > > > > > > > > > > > > + aml_int(0x80))); > > > > > > > > > > > > + aml_append(ssdt, method); > > > > > > > > > > > > + } > > > > > > > > > > > > + } > > > > > > > > > > > > + > > > > > > > > > > > > aml_append(sb_scope, scope); > > > > > > > > > > > > } > > > > > > > > > > > > } > > > > > > > > > > > > @@ -2489,8 +2543,6 @@ build_dsdt(GArray *table_data, GArray *linker, > > > > > > > > > > > > { > > > > > > > > > > > > aml_append(scope, aml_name_decl("_HID", aml_string("ACPI0006"))); > > > > > > > > > > > > > > > > > > > > > > > > - aml_append(scope, aml_method("_L00", 0, AML_NOTSERIALIZED)); > > > > > > > > > > > > - > > > > > > > > > > > > if (misc->is_piix4) { > > > > > > > > > > > > method = aml_method("_E01", 0, AML_NOTSERIALIZED); > > > > > > > > > > > > aml_append(method, > > > > > > > > > > > > diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs > > > > > > > > > > > > index d4765c2..1f05edd 100644 > > > > > > > > > > > > --- a/hw/misc/Makefile.objs > > > > > > > > > > > > +++ b/hw/misc/Makefile.objs > > > > > > > > > > > > @@ -43,4 +43,5 @@ obj-$(CONFIG_STM32F2XX_SYSCFG) += stm32f2xx_syscfg.o > > > > > > > > > > > > > > > > > > > > > > > > obj-$(CONFIG_PVPANIC) += pvpanic.o > > > > > > > > > > > > obj-$(CONFIG_EDU) += edu.o > > > > > > > > > > > > +obj-$(CONFIG_VMGENID) += vmgenid.o > > > > > > > > > > > > obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o > > > > > > > > > > > > diff --git a/hw/misc/vmgenid.c b/hw/misc/vmgenid.c > > > > > > > > > > > > new file mode 100644 > > > > > > > > > > > > index 0000000..a2fbdfc > > > > > > > > > > > > --- /dev/null > > > > > > > > > > > > +++ b/hw/misc/vmgenid.c > > > > > > > > > > > > @@ -0,0 +1,154 @@ > > > > > > > > > > > > +/* > > > > > > > > > > > > + * Virtual Machine Generation ID Device > > > > > > > > > > > > + * > > > > > > > > > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > > > > > > > > > + * > > > > > > > > > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > > > > > > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > > > > > > > > > + * > > > > > > > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. 
> > > > > > > > > > > > + * See the COPYING file in the top-level directory. > > > > > > > > > > > > + * > > > > > > > > > > > > + */ > > > > > > > > > > > > + > > > > > > > > > > > > +#include "hw/i386/pc.h" > > > > > > > > > > > > +#include "hw/pci/pci.h" > > > > > > > > > > > > +#include "hw/misc/vmgenid.h" > > > > > > > > > > > > +#include "hw/acpi/acpi.h" > > > > > > > > > > > > +#include "qapi/visitor.h" > > > > > > > > > > > > + > > > > > > > > > > > > +#define VMGENID(obj) OBJECT_CHECK(VmGenIdState, (obj), VMGENID_DEVICE) > > > > > > > > > > > > + > > > > > > > > > > > > +typedef struct VmGenIdState { > > > > > > > > > > > > + PCIDevice parent_obj; > > > > > > > > > > > > + MemoryRegion iomem; > > > > > > > > > > > > + union { > > > > > > > > > > > > + uint8_t guid[16]; > > > > > > > > > > > > + uint8_t guid_page[VMGENID_VMGID_BUF_SIZE]; > > > > > > > > > > > > + }; > > > > > > > > > > > > + bool guid_set; > > > > > > > > > > > > +} VmGenIdState; > > > > > > > > > > > > + > > > > > > > > > > > > +Object *find_vmgneid_dev(Error **errp) > > > > > > > > > > > > +{ > > > > > > > > > > > > + Object *obj = object_resolve_path_type("", VMGENID_DEVICE, NULL); > > > > > > > > > > > > + if (!obj) { > > > > > > > > > > > > + error_setg(errp, VMGENID_DEVICE " is not found"); > > > > > > > > > > > > + } > > > > > > > > > > > > + return obj; > > > > > > > > > > > > +} > > > > > > > > > > > > + > > > > > > > > > > > > +static void vmgenid_update_guest(VmGenIdState *s) > > > > > > > > > > > > +{ > > > > > > > > > > > > + Object *acpi_obj; > > > > > > > > > > > > + void *ptr = memory_region_get_ram_ptr(&s->iomem); > > > > > > > > > > > > + > > > > > > > > > > > > + memcpy(ptr, &s->guid, sizeof(s->guid)); > > > > > > > > > > > > + memory_region_set_dirty(&s->iomem, 0, sizeof(s->guid)); > > > > > > > > > > > > + > > > > > > > > > > > > + acpi_obj = object_resolve_path_type("", TYPE_ACPI_DEVICE_IF, NULL); > > > > > > > > > > > > + if (acpi_obj) { > > > > > > > > > > > > + AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_obj); > > > > > > > > > > > > + AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_obj); > > > > > > > > > > > > + ACPIREGS *acpi_regs = adevc->regs(adev); > > > > > > > > > > > > + > > > > > > > > > > > > + acpi_regs->gpe.sts[0] |= 1; /* _GPE.E00 handler */ > > > > > > > > > > > > + acpi_update_sci(acpi_regs, adevc->sci(adev)); > > > > > > > > > > > > + } > > > > > > > > > > > > +} > > > > > > > > > > > > + > > > > > > > > > > > > +static void vmgenid_set_guid(Object *obj, const char *value, Error **errp) > > > > > > > > > > > > +{ > > > > > > > > > > > > + VmGenIdState *s = VMGENID(obj); > > > > > > > > > > > > + > > > > > > > > > > > > + if (qemu_uuid_parse(value, s->guid) < 0) { > > > > > > > > > > > > + error_setg(errp, "'%s." 
VMGENID_GUID > > > > > > > > > > > > + "': Failed to parse GUID string: %s", > > > > > > > > > > > > + object_get_typename(OBJECT(s)), > > > > > > > > > > > > + value); > > > > > > > > > > > > + return; > > > > > > > > > > > > + } > > > > > > > > > > > > + > > > > > > > > > > > > + s->guid_set = true; > > > > > > > > > > > > + vmgenid_update_guest(s); > > > > > > > > > > > > +} > > > > > > > > > > > > + > > > > > > > > > > > > +static void vmgenid_get_vmgid_addr(Object *obj, Visitor *v, void *opaque, > > > > > > > > > > > > + const char *name, Error **errp) > > > > > > > > > > > > +{ > > > > > > > > > > > > + int64_t value = pci_get_bar_addr(PCI_DEVICE(obj), 0); > > > > > > > > > > > > + > > > > > > > > > > > > + if (value == PCI_BAR_UNMAPPED) { > > > > > > > > > > > > + error_setg(errp, "'%s." VMGENID_VMGID_BUF_ADDR "': not initialized", > > > > > > > > > > > > + object_get_typename(OBJECT(obj))); > > > > > > > > > > > > + return; > > > > > > > > > > > > + } > > > > > > > > > > > > + visit_type_int(v, &value, name, errp); > > > > > > > > > > > > +} > > > > > > > > > > > > + > > > > > > > > > > > > +static void vmgenid_initfn(Object *obj) > > > > > > > > > > > > +{ > > > > > > > > > > > > + VmGenIdState *s = VMGENID(obj); > > > > > > > > > > > > + > > > > > > > > > > > > + memory_region_init_ram(&s->iomem, obj, "vgid.bar", sizeof(s->guid_page), > > > > > > > > > > > > + &error_abort); > > > > > > > > > > > > + > > > > > > > > > > > > + object_property_add_str(obj, VMGENID_GUID, NULL, vmgenid_set_guid, NULL); > > > > > > > > > > > > + object_property_add(obj, VMGENID_VMGID_BUF_ADDR, "int", > > > > > > > > > > > > + vmgenid_get_vmgid_addr, NULL, NULL, NULL, NULL); > > > > > > > > > > > > +} > > > > > > > > > > > > + > > > > > > > > > > > > + > > > > > > > > > > > > +static void vmgenid_realize(PCIDevice *dev, Error **errp) > > > > > > > > > > > > +{ > > > > > > > > > > > > + VmGenIdState *s = VMGENID(dev); > > > > > > > > > > > > + bool ambiguous = false; > > > > > > > > > > > > + > > > > > > > > > > > > + object_resolve_path_type("", VMGENID_DEVICE, &ambiguous); > > > > > > > > > > > > + if (ambiguous) { > > > > > > > > > > > > + error_setg(errp, "no more than one " VMGENID_DEVICE > > > > > > > > > > > > + " device is permitted"); > > > > > > > > > > > > + return; > > > > > > > > > > > > + } > > > > > > > > > > > > + > > > > > > > > > > > > + if (!s->guid_set) { > > > > > > > > > > > > + error_setg(errp, "'%s." 
VMGENID_GUID "' property is not set", > > > > > > > > > > > > + object_get_typename(OBJECT(s))); > > > > > > > > > > > > + return; > > > > > > > > > > > > + } > > > > > > > > > > > > + > > > > > > > > > > > > + vmstate_register_ram(&s->iomem, DEVICE(s)); > > > > > > > > > > > > + pci_register_bar(PCI_DEVICE(s), VMGENID_VMGID_BUF_BAR, > > > > > > > > > > > > + PCI_BASE_ADDRESS_MEM_PREFETCH | > > > > > > > > > > > > + PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, > > > > > > > > > > > > + &s->iomem); > > > > > > > > > > > > + return; > > > > > > > > > > > > +} > > > > > > > > > > > > + > > > > > > > > > > > > +static void vmgenid_class_init(ObjectClass *klass, void *data) > > > > > > > > > > > > +{ > > > > > > > > > > > > + DeviceClass *dc = DEVICE_CLASS(klass); > > > > > > > > > > > > + PCIDeviceClass *k = PCI_DEVICE_CLASS(klass); > > > > > > > > > > > > + > > > > > > > > > > > > + set_bit(DEVICE_CATEGORY_MISC, dc->categories); > > > > > > > > > > > > + dc->hotpluggable = false; > > > > > > > > > > > > + k->realize = vmgenid_realize; > > > > > > > > > > > > + k->vendor_id = PCI_VENDOR_ID_REDHAT; > > > > > > > > > > > > + k->device_id = PCI_DEVICE_ID_REDHAT_VMGENID; > > > > > > > > > > > > + k->class_id = PCI_CLASS_MEMORY_RAM; > > > > > > > > > > > > +} > > > > > > > > > > > > + > > > > > > > > > > > > +static const TypeInfo vmgenid_device_info = { > > > > > > > > > > > > + .name = VMGENID_DEVICE, > > > > > > > > > > > > + .parent = TYPE_PCI_DEVICE, > > > > > > > > > > > > + .instance_size = sizeof(VmGenIdState), > > > > > > > > > > > > + .instance_init = vmgenid_initfn, > > > > > > > > > > > > + .class_init = vmgenid_class_init, > > > > > > > > > > > > +}; > > > > > > > > > > > > + > > > > > > > > > > > > +static void vmgenid_register_types(void) > > > > > > > > > > > > +{ > > > > > > > > > > > > + type_register_static(&vmgenid_device_info); > > > > > > > > > > > > +} > > > > > > > > > > > > + > > > > > > > > > > > > +type_init(vmgenid_register_types) > > > > > > > > > > > > diff --git a/include/hw/misc/vmgenid.h b/include/hw/misc/vmgenid.h > > > > > > > > > > > > new file mode 100644 > > > > > > > > > > > > index 0000000..b90882c > > > > > > > > > > > > --- /dev/null > > > > > > > > > > > > +++ b/include/hw/misc/vmgenid.h > > > > > > > > > > > > @@ -0,0 +1,27 @@ > > > > > > > > > > > > +/* > > > > > > > > > > > > + * Virtual Machine Generation ID Device > > > > > > > > > > > > + * > > > > > > > > > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > > > > > > > > > + * > > > > > > > > > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > > > > > > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > > > > > > > > > + * > > > > > > > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > > > > > > > > > + * See the COPYING file in the top-level directory. 
> > > > > > > > > > > > + * > > > > > > > > > > > > + */ > > > > > > > > > > > > + > > > > > > > > > > > > +#ifndef HW_MISC_VMGENID_H > > > > > > > > > > > > +#define HW_MISC_VMGENID_H > > > > > > > > > > > > + > > > > > > > > > > > > +#include "qom/object.h" > > > > > > > > > > > > + > > > > > > > > > > > > +#define VMGENID_DEVICE "vmgenid" > > > > > > > > > > > > +#define VMGENID_GUID "guid" > > > > > > > > > > > > +#define VMGENID_VMGID_BUF_ADDR "vmgid-addr" > > > > > > > > > > > > +#define VMGENID_VMGID_BUF_SIZE 0x1000 > > > > > > > > > > > > +#define VMGENID_VMGID_BUF_BAR 0 > > > > > > > > > > > > + > > > > > > > > > > > > +Object *find_vmgneid_dev(Error **errp); > > > > > > > > > > > > + > > > > > > > > > > > > +#endif > > > > > > > > > > > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h > > > > > > > > > > > > index dedf277..f4c9d48 100644 > > > > > > > > > > > > --- a/include/hw/pci/pci.h > > > > > > > > > > > > +++ b/include/hw/pci/pci.h > > > > > > > > > > > > @@ -94,6 +94,7 @@ > > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB 0x0009 > > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_BRIDGE_SEAT 0x000a > > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB_PCIE 0x000b > > > > > > > > > > > > +#define PCI_DEVICE_ID_REDHAT_VMGENID 0x000c > > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_QXL 0x0100 > > > > > > > > > > > > > > > > > > > > > > > > #define FMT_PCIBUS PRIx64 > > > > > > > > > > > > -- > > > > > > > > > > > > 1.8.3.1 > > > > > > > > > >
On Mon, Feb 15, 2016 at 11:30:24AM +0100, Igor Mammedov wrote:
> On Thu, 11 Feb 2016 18:30:19 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Thu, Feb 11, 2016 at 04:16:05PM +0100, Igor Mammedov wrote:
> > > On Tue, 9 Feb 2016 14:17:44 +0200
> > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > On Tue, Feb 09, 2016 at 11:46:08AM +0100, Igor Mammedov wrote:
> > > > > > So the linker interface solves this rather neatly:
> > > > > > bios allocates memory, bios passes memory map to guest.
> > > > > > Served us well for several years without need for extensions,
> > > > > > and it does solve the VM GEN ID problem, even though
> > > > > > 1. it was never designed for huge areas like nvdimm seems to want to use
> > > > > > 2. we might want to add a new 64 bit flag to avoid touching low memory
> > > > > the linker interface is fine for some read-only data, like ACPI tables,
> > > > > especially fixed ones; not so for AML, if one wants to patch it.
> > > > >
> > > > > However, once you want to use it for other purposes, you start
> > > > > adding extensions and other guest->QEMU channels to communicate
> > > > > patching info back.
> > > > > It also steals the guest's memory, which is not nice and doesn't scale well.
> > > > This is an argument I don't get. Memory is memory. Call it guest memory
> > > > or a RAM-backed PCI BAR: same thing. MMIO is cheaper of course,
> > > > but much slower.
> > > > ...
> > > It does matter to the user, though: he pays for a guest with XXX RAM but gets
> > > less than that. And that will keep getting worse as the number of such
> > > devices increases.
> > > > > > OK fine, but returning the PCI BAR address to the guest is wrong.
> > > > > > How about reading it from ACPI then? Is it really
> > > > > > broken unless there's *also* a driver?
> > > > > I don't get the question: the MS spec requires the address (ADDR method),
> > > > > and it's read by ACPI (AML).
> > > > You were unhappy about DMA into guest memory.
> > > > As a replacement for DMA, we could have AML read from
> > > > e.g. PCI and write into RAM.
> > > > This way we don't need to pass an address to QEMU.
> > > That sounds better, as it saves us from allocating an IO port,
> > > and QEMU doesn't need to write into guest memory; the only question is
> > > whether a PCI_Config opregion would work with a driver-less PCI device.
> > Or a PCI BAR, for that matter. I don't know for sure.
> unfortunately a BAR doesn't work for a driver-less PCI device,
> but maybe we can add a vendor-specific PCI_Config to the always-present
> LPC/ISA bridges and make it do the job, like it already does for allocating
> IO ports for CPU/MEM hotplug now.
> > > And it's still pretty much not testable, since it would require a
> > > fully running OSPM to execute the AML side.
> > AML is not testable, but that's nothing new.
> > You can test reading from PCI.
> > > > > As for a working PCI_Config OpRegion without a driver, I haven't tried,
> > > > > but I wouldn't be surprised if it doesn't work, taking into account that
> > > > > the MS-introduced _DSM doesn't.
> > > > > > > > > Just compare with a graphics card design, where on-device memory
> > > > > > > > > is mapped directly at some GPA, not wasting RAM that the guest
> > > > > > > > > could use for other tasks.
> > > > > > > > This might have been true 20 years ago. Most modern cards do DMA.
> > > > > > > Modern cards, with their own RAM, map their VRAM into the address
> > > > > > > space directly and allow users to use it (GEM API).
> > > > > > > So they do not waste conventional RAM.
> > > > > > > For example, NVIDIA VRAM is mapped as PCI BARs the same way as in
> > > > > > > this series (even the PCI class id is the same).
> > > > > > Don't know enough about graphics really, I'm not sure how these are
> > > > > > relevant. NICs and disks certainly do DMA. And virtio gl seems to
> > > > > > mostly use guest RAM, not on-card RAM.
> > > > > > > > > The VMGENID and NVDIMM use-cases look exactly the same to me, i.e.
> > > > > > > > > instead of consuming the guest's RAM they should be mapped at
> > > > > > > > > some GPA and their memory accessed directly.
> > > > > > > > VMGENID is tied to a spec that rather arbitrarily asks for a fixed
> > > > > > > > address. This breaks the straight-forward approach of using a
> > > > > > > > rebalanceable PCI BAR.
> > > > > > > For PCI rebalancing to work on Windows, one has to provide a working
> > > > > > > PCI driver; otherwise the OS will ignore the device when rebalancing
> > > > > > > happens and might map something else over the ignored BAR.
> > > > > > Does it disable the BAR then? Or just move it elsewhere?
> > > > > it doesn't; it just blindly ignores the BAR's existence and maps the
> > > > > BAR of another device (with a driver) over it.
> > > > Interesting. On classical PCI this is a forbidden configuration.
> > > > Maybe we do something that confuses windows?
> > > > Could you tell me how to reproduce this behaviour?
> > > # cat > t << EOF
> > > pci_update_mappings_del
> > > pci_update_mappings_add
> > > EOF
> > >
> > > # ./x86_64-softmmu/qemu-system-x86_64 -snapshot -enable-kvm -snapshot \
> > >     -monitor unix:/tmp/m,server,nowait -device pci-bridge,chassis_nr=1 \
> > >     -boot menu=on -m 4G -trace events=t ws2012r2x64dc.img \
> > >     -device ivshmem,id=foo,size=2M,shm,bus=pci.1,addr=01
> > >
> > > wait till the OS boots, note the BARs programmed for ivshmem;
> > > in my case it was
> > >     01:01.0 0,0xfe800000+0x100
> > > then execute the script and watch the pci_update_mappings* trace events
> > >
> > > # for i in $(seq 3 18); do printf -- "device_add e1000,bus=pci.1,addr=%x\n" $i | nc -U /tmp/m; sleep 5; done;
> > >
> > > hotplugging e1000,bus=pci.1,addr=12 triggers rebalancing, where
> > > Windows unmaps all BARs of the NICs on the bridge but doesn't touch
> > > ivshmem, and then programs new BARs, where:
> > >     pci_update_mappings_add d=0x7fa02ff0cf90 01:11.0 0,0xfe800000+0x20000
> > > creates a BAR overlapping with ivshmem
> > Thanks!
> > We need to figure this out because currently this does not
> > work properly (or maybe it works, but merely by chance).
> > Me and Marcel will play with this.
> > > > > > > > > In that case NVDIMM could even map the whole label area and
> > > > > > > > > significantly simplify the QEMU<->OSPM protocol that currently
> > > > > > > > > serializes that data through a 4K page.
> > > > > > > > > There is also a performance issue with a buffer allocated in RAM,
> > > > > > > > > because DMA adds an unnecessary copying step when data could
> > > > > > > > > be read/written directly off the NVDIMM.
> > > > > > > > > It might not be very important for the _DSM interface, but when
> > > > > > > > > it comes to supporting block mode it can become an issue.
> > > > > > > > So for NVDIMM, presumably it will have code accessing the PCI BAR
> > > > > > > > properly, so it's guaranteed to work across BAR rebalancing.
> > > > > > > > Would that address the performance issue?
> > > > > > > it would, if rebalancing were to account for driverless PCI device
> > > > > > > BARs; but it doesn't, hence such BARs need to be statically pinned
> > > > > > > at the place where the BIOS put them at startup.
> > > > > > > I'm also not sure that a PCI_Config operation region would work
> > > > > > > on Windows without a loaded driver (similar to the _DSM case).
> > > > > > > > > Above points make the ACPI patching approach not robust, fragile
> > > > > > > > > and hard to maintain.
> > > > > > > > Wrt GEN ID these are all kind of subjective though. I especially
> > > > > > > > don't get what appears to be your general dislike of the linker
> > > > > > > > host/guest interface.
> > > > > > > Besides the technical issues, the general dislike is just what I've
> > > > > > > written: the bios_linker_loader_add_pointer() interface is "not
> > > > > > > robust and fragile".
> > > > > > >
> > > > > > > To make it less fragile:
> > > > > > > 1. it should be impossible to corrupt memory or patch the wrong
> > > > > > >    address. The current impl. silently relies on the value referenced
> > > > > > >    by the 'pointer' argument, and to figure that out one has to read
> > > > > > >    the linker code on the BIOS side. That could easily be set wrong
> > > > > > >    and slip through review.
> > > > > > That's an API issue, it seemed like a good idea but I guess
> > > > > > it confuses people. Would you be happier using an offset
> > > > > > instead of a pointer?
> > > > > offset is better, and it would be better if it said
> > > > > which offset it is (i.e. relative to what)
> > > > Start of table, right?
> > > not sure; to me it looks like the start of a blob and not the table
> > Right, that's what I meant.
> > > > > > > The API shouldn't rely on the caller setting the value pointed to
> > > > > > > by that argument.
> > > > > > I couldn't parse that one. Care suggesting a cleaner API for the linker?
> > > > > here is the current API signature:
> > > > >
> > > > >     bios_linker_loader_add_pointer(GArray *linker,
> > > > >                                    const char *dest_file,
> > > > >                                    const char *src_file,
> > > > >                                    GArray *table, void *pointer,
> > > > >                                    uint8_t pointer_size)
> > > > >
> > > > > issue 1:
> > > > > 'pointer' is a real pointer pointing inside 'table', and the API
> > > > > calculates the offset under the hood:
> > > > >     offset = (gchar *)pointer - table->data;
> > > > > and puts it in the ADD_POINTER command.
> > > > > It's easy to get a wrong offset if 'pointer' is not from 'table'.
> > > > OK, replace that with table_offset?
> > > blob_offset?
> > > also s/table/blob/
> > OK.
> > > > > issue 2:
> > > > > 'pointer' points to another offset of size 'pointer_size' in the
> > > > > 'table' blob; that means that whoever composes the blob has to be
> > > > > aware of it and fill in the correct value there, which is possible to
> > > > > do right only if one looks inside the SeaBIOS part of the linker
> > > > > interface. That is easy to forget, and then one has to deal with the
> > > > > mess caused by random memory corruption.
> > > > >
> > > > > bios_linker_loader_add_pointer() and the corresponding
> > > > > ADD_POINTER command should take this second offset as an argument
> > > > > and not require 'table' to be pre-filled with it; or,
> > > > > in the worst case, if extending the ADD_POINTER command is problematic,
> > > > > bios_linker_loader_add_pointer() should still take
> > > > > the second offset and patch 'table' itself, so that the 'table'
> > > > > composer doesn't have to worry about it.
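(An aside to make issues 1 and 2 concrete: a minimal free-standing sketch, with simplified stand-in types and a little-endian host assumed. This is illustrative only, not the actual QEMU code; it shows the pointer-to-offset derivation and the pre-fill step the current interface expects from the blob composer.)

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Simplified stand-in for the GArray-backed blob the real API uses. */
    typedef struct {
        uint8_t *data;
        size_t   len;
    } Blob;

    /* Issue 1: the offset is derived from raw pointer arithmetic, so nothing
     * ties 'pointer' to 'blob'; the bounds check below is what catches a
     * pointer accidentally taken from some other buffer. */
    static size_t offset_of(const Blob *blob, const void *pointer)
    {
        size_t offset = (const uint8_t *)pointer - blob->data;
        assert(offset < blob->len);
        return offset;
    }

    /* Issue 2: the field at that offset must be pre-filled by the caller with
     * the target's offset inside the source blob, or the guest-side linker
     * will relocate garbage. */
    static void prefill(Blob *blob, size_t offset, uint32_t src_offset)
    {
        assert(offset + sizeof(src_offset) <= blob->len);
        memcpy(blob->data + offset, &src_offset, sizeof(src_offset)); /* LE host */
    }

    int main(void)
    {
        uint8_t buf[16] = { 0 };
        Blob rsdp = { buf, sizeof(buf) };

        prefill(&rsdp, offset_of(&rsdp, &buf[4]), 0x80);
        return 0;
    }

With a blob_offset argument, as discussed above, the first failure mode disappears because the caller states the offset explicitly.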
> > > > This one I don't understand. What's the second pointer you
> > > > are talking about?
> > > ha, see, even the author already has absolutely no clue how the linker
> > > works or what the offsets are relative to.
> > > See SeaBIOS's romfile_loader_add_pointer():
> > >     ...
> > >     memcpy(&pointer, dest_file->data + offset, entry->pointer_size);
> > > here is the second offset  ^^^^^^^^^^^^^
> > It's the same offset in the entry.
> >
> >     struct {
> >         char pointer_dest_file[ROMFILE_LOADER_FILESZ];
> >         char pointer_src_file[ROMFILE_LOADER_FILESZ];
> >         u32 pointer_offset;
> >         u8 pointer_size;
> >     };
> pointer_offset == the offset from above, but the question is what is the
> result of the memcpy() and how it's used below vvvv

It's just because seabios is trying to be endianness-agnostic.
So instead of patching the value directly, we read the pointer value,
add the offset, then store the result back.

> > >     pointer = le64_to_cpu(pointer);
> > >     pointer += (unsigned long)src_file->data;
> as you see, *(foo_size *)(dest_file->data + offset) is the 2nd offset,
> relative to the beginning of src_file

Sorry, I don't see. Where's the second offset? I see a single one.

> and the current API requires the dst_file blob to contain a valid value of
> pointer_size there, i.e. the author of the AML has to prefill the 2nd
> offset before passing the blob to bios_linker_loader_add_pointer()

Since you say prefill, I am guessing what you refer to is the fact that the
add_pointer command adds a pointer to the current value in the dest blob,
as opposed to overwriting it.

So if you want the result to point at an offset within the source blob,
you can put that offset within the pointer value.

> which is rather fragile.
> If it's difficult to make the ADD_POINTER command pass that offset
> as part of the command, then it would be better to extend
> bios_linker_loader_add_pointer() to take src_offset and write
> it into the blob, instead of asking the AML author to do it manually
> every time.

Okay, so let's be more specific. This is what makes you unhappy, I guess?

    rsdp->rsdt_physical_address = cpu_to_le32(rsdt);
    /* Address to be filled by Guest linker */
    bios_linker_loader_add_pointer(linker, ACPI_BUILD_RSDP_FILE,
                                   ACPI_BUILD_TABLE_FILE,
                                   rsdp_table,
                                   &rsdp->rsdt_physical_address,
                                   sizeof rsdp->rsdt_physical_address);

You need to remember to fill in the patched value.
How would you improve on this?

> And maybe add a bios_linker_loader_add_aml_pointer() which would be able
> to check that it patches AML correctly, while the old
> bios_linker_loader_add_pointer() would do no check and work with raw tables.

I'm not sure how that will be used.

> > >     pointer = cpu_to_le64(pointer);
> > >     memcpy(dest_file->data + offset, &pointer, entry->pointer_size);
> > > all these src|dst_file and confusing offsets (whatever they might mean)
> > > make me experience a headache every time I need to remember how the
> > > linker works, and read both the QEMU and SeaBIOS code to figure it out
> > > each time. That's what I'd call an un-maintainable and hard-to-use API.
> > Right, there's a lack of documentation. It's my fault, so let's fix it.
> > It's an API issue, nothing to do with the ABI.
> > > > > issue 3:
> > > > > all patching obviously needs bounds checking on the QEMU side,
> > > > > so it would abort early if it could corrupt memory.
> > > > That's easy.
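(As an aside, the guest-side patching step being picked apart here, read the pre-filled value, relocate it against the source blob, store it back, fits in a few lines. A condensed sketch follows, modeled loosely on SeaBIOS's romfile_loader_add_pointer() with the bounds check from issue 3 folded in; a little-endian host is assumed and all names are simplified, so this is not the shipped SeaBIOS code.)

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* One ADD_POINTER command: dest[offset .. offset+size) holds a
     * little-endian value pre-filled by the blob composer (an offset into
     * src); add src's load address to it and write the result back. */
    static void add_pointer_cmd(uint8_t *dest, size_t dest_len,
                                uint32_t offset, uint8_t size,
                                const uint8_t *src_load_addr)
    {
        uint64_t pointer = 0;

        assert(size == 1 || size == 2 || size == 4 || size == 8);
        assert(offset + size <= dest_len);      /* the missing bounds check */

        memcpy(&pointer, dest + offset, size);  /* the pre-filled "2nd offset" */
        pointer += (uintptr_t)src_load_addr;    /* relocate against src        */
        memcpy(dest + offset, &pointer, size);  /* store the patched pointer   */
    }

    int main(void)
    {
        uint8_t table[32];                 /* stands in for the loaded src blob */
        uint8_t rsdp[8] = { 0x10 };        /* field pre-filled with offset 0x10 */

        add_pointer_cmd(rsdp, sizeof(rsdp), 0, 8, table);
        return 0;
    }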
> > > > > > > 2. If it's going to be used for patching AML, it should assert
> > > > > > >    when bios_linker_loader_add_pointer() is called, if the
> > > > > > >    to-be-patched AML object is wrong and patching would corrupt
> > > > > > >    the AML blob.
> > > > > > Hmm, for example check that the patched data has
> > > > > > the expected pattern?
> > > > > yep; nothing can be done for raw tables, but that should be possible
> > > > > for AML tables, and if the pattern is unsupported or the size doesn't
> > > > > match, it should abort QEMU early instead of corrupting the table.
> > > > Above all sounds reasonable. Would you like to take a stab
> > > > at it or prefer me to?
> > > It would be better if it were you.
> > >
> > > I wouldn't like to ever maintain it, as it's too complex and hard to use
> > > an API that I'd use only as a last resort, if there weren't any other way
> > > to implement the task at hand.
> > Sorry, I don't get it. You don't like the API? Write a better one
> > for an existing ABI. If you prefer waiting for me to fix it,
> > that's fine too, but no guarantees that you will like the new one
> > or when it will happen.
> >
> > Look, there has been 1 change (a bugfix for alignment) in the several
> > years since we added the linker. We don't maintain any compatibility
> > flags around it *at all*. It might have a hard-to-use API, but that is
> > the definition of easy to maintain. You are pushing allocating memory
> > host-side as an alternative; what happened there is the reverse. A ton
> > of changes and pain all the way, and we get to maintain a bag of compat
> > hacks for old machine types. You say we finally know what we are
> > doing and won't have to change it any more. I'm not convinced.
> You are talking (lots of compat issues) about the manually mapped initial
> memory map, while I'm talking about allocation in the hotplug memory region.
> It also had one compat 'improvement' for alignment, but otherwise
> it wasn't a source of any problems.

I'll have to look at the differences. Why don't they affect the hotplug
memory region?

> > > > > > > > It's there and we are not moving away from it, so why not
> > > > > > > > use it in more places? Or if you think it's wrong, why don't you
> > > > > > > > build something better then? We could then maybe use it for
> > > > > > > > these things as well.
> > > > > > > Yep, I think for vmgenid, and even more so for nvdimm,
> > > > > > > it would be better to allocate GPAs in QEMU and map the backing
> > > > > > > MemoryRegions directly in QEMU.
> > > > > > > For nvdimm (the main data region)
> > > > > > > we already do it using pc-dimm's GPA allocation algorithm; we
> > > > > > > could use a similar approach for nvdimm's label area and vmgenid.
> > > > > > >
> > > > > > > Here is a simple attempt to add a limited GPA allocator in high memory:
> > > > > > >     https://patchwork.ozlabs.org/patch/540852/
> > > > > > > But it hasn't got any comments from you and was ignored.
> > > > > > > Let's consider it, and perhaps we can come up with a GPA allocator
> > > > > > > that could be used for other things as well.
> > > > > > For the nvdimm label area, I agree passing things through
> > > > > > a 4K buffer seems inefficient.
> > > > > >
> > > > > > I'm not sure what's a better way though.
> > > > > >
> > > > > > Use 64 bit memory? Setting aside old guests such as XP,
> > > > > > does it break 32 bit guests?
> > > > > it might not work with 32-bit guests, the same way as mem hotplug
> > > > > doesn't work for them unless they are PAE-enabled.
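(An aside on the pattern check suggested in point 2 above: a minimal sketch of such an early abort, assuming the to-be-patched object was emitted as an AML DWordConst, i.e. a 0x0C prefix byte followed by a 4-byte little-endian value. Names and the calling convention are illustrative only, not a QEMU API.)

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define AML_DWORD_PREFIX 0x0C  /* DWordConst op prefix per the ACPI spec */

    /* Abort early (instead of silently corrupting the table) unless the
     * bytes about to be patched look like the value part of a DWordConst. */
    static void assert_patchable_dword(const uint8_t *blob, size_t blob_len,
                                       size_t offset)
    {
        assert(offset >= 1 && offset + 4 <= blob_len);
        assert(blob[offset - 1] == AML_DWORD_PREFIX);
    }

    int main(void)
    {
        /* 0x0C followed by a placeholder dword, as an AML encoder would emit. */
        uint8_t aml[] = { 0x0C, 0x00, 0x00, 0x00, 0x00 };

        assert_patchable_dword(aml, sizeof(aml), 1);
        return 0;
    }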
> > > > Right, I mean with PAE.
> > > I've tested it with 32-bit XP and Windows 10; they boot fine, and the
> > > vmgenid device is displayed as OK with the buffer above 4Gb (on Win10).
> > > So at least it doesn't crash the guest.
> > > I can't test more than that for 32-bit guests, since the utility
> > > to read the vmgenid works only on Windows Server and there isn't
> > > a 32-bit version of it.
> > > > > but well, that's a limitation of the implementation, and considering
> > > > > that the nvdimm storage area is mapped at a 64-bit GPA, it doesn't
> > > > > matter.
> > > > > > I'm really afraid of adding yet another allocator. I think you
> > > > > > underestimate the maintenance headache: it's not theoretical and is
> > > > > > already felt.
> > > > > The current maintenance headache is due to the fixed, handpicked
> > > > > memory layout. We can't do much about it for legacy machine
> > > > > types, but with a QEMU-side GPA allocator we can try to switch
> > > > > to a flexible memory layout that would allocate GPAs
> > > > > depending on the QEMU config in a stable manner.
> > > > So far, we didn't manage to. It seems to go in the reverse
> > > > direction, where we add more and more control to let management
> > > > influence the layout. Things like alignment requirements
> > > > also tend to surface later and wreak havoc on whatever
> > > > we do.
> > > Was there even an attempt to try it before? Could you point to it?
> > Look at the mess we have with the existing allocator.
> > As a way to fix this unmaintainable mess, what I see is suggestions
> > to drop old machine types so people have to reinstall guests.
> > This does not inspire confidence.
> > > The only attempt I've seen was
> > >     https://patchwork.ozlabs.org/patch/540852/
> > > but it hasn't got any technical comments from you,
> > > except an 'I'm afraid that it won't work' on IRC.
> > >
> > > QEMU already has a GPA allocator limited to the memory hotplug AS,
> > > and it has passed through its 'growing' issues. What the above patch
> > > proposes is to reuse the already existing memory hotplug AS and
> > > maybe make its GPA allocator more generic (i.e. not tied
> > > only to pc-dimm) on top of it.
> > You say we finally know what we are doing and won't have to change it
> > any more. I'm not convinced.
> > > It's sufficient for the vmgenid use-case and definitely
> > > much more suitable for nvdimm, which already uses it for mapping the
> > > main storage MemoryRegion.
> > > > > well, there is a maintenance headache with bios_linker as well,
> > > > > due to its complexity (multiple layers of indirection), and
> > > > > it will grow when more places try to use it.
> > > > > Yep, we could use it as a hack, stealing RAM and trying to implement
> > > > > backwards DMA; or we could be less afraid and consider
> > > > > yet another allocator which will do the job without hacks,
> > > > > which should benefit QEMU in the long run (it might not be easy
> > > > > to impl. it right, but if we won't even try, we would be buried
> > > > > in complex hacks that 'work' for now)
> > > > > > > > > > > > And hey, if you want to use a pci device to pass the
> > > > > > > > > > > > physical address guest to host, instead of reserving
> > > > > > > > > > > > a couple of IO addresses, sure, stick it in pci config in
> > > > > > > > > > > > a vendor-specific capability, this way it'll get migrated
> > > > > > > > > > > > automatically.
> > > > > > > > > > > Could you elaborate more on this suggestion?
> > > > > > > > > > I really just mean using the PCI_Config operation region.
> > > > > > > > > > If you wish, I'll try to post a prototype next week.
> > > > > > > > > I don't know much about PCI, but it would be interesting;
> > > > > > > > > perhaps we could use it somewhere else.
> > > > > > > > > However, it should be checked whether it works with Windows:
> > > > > > > > > for example, the PCI-specific _DSM method is ignored by it
> > > > > > > > > if the PCI device doesn't have a working PCI driver bound to it.
> > > > > > > > > [...]
> > > > > > > > > > > > What does "this commit message" mean?
> > > > > > > > > > > the commit message above. Should I reword it to just "see
> > > > > > > > > > > commit message"?
> > > > > > > > > > > [...]
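(For reference, the vendor-specific capability idea floated in this exchange, sketched from the guest side: walk the device's capability list over the legacy 0xCF8/0xCFC config mechanism and locate a vendor-specific capability, whose ID is 0x09. The port I/O helpers here are inert stubs and the device address is hypothetical; this illustrates the mechanism only and is not code from the series.)

    #include <stdint.h>

    /* Stand-ins for port I/O on the legacy 0xCF8/0xCFC config mechanism; a
     * real guest would use inline asm or the OS's config-space interface. */
    static void outl(uint32_t value, uint16_t port) { (void)value; (void)port; }
    static uint32_t inl(uint16_t port) { (void)port; return 0xFFFFFFFFu; }

    static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn,
                                   uint8_t reg)
    {
        outl(0x80000000u | bus << 16 | dev << 11 | fn << 8 | (reg & 0xFC),
             0xCF8);
        return inl(0xCFC);
    }

    /* Walk the capability list (pointer at config offset 0x34) looking for a
     * vendor-specific capability (ID 0x09), where e.g. a GUID buffer address
     * could live so that it migrates with the device's config space. */
    static uint8_t find_vendor_cap(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        uint8_t pos = pci_cfg_read32(bus, dev, fn, 0x34) & 0xFF;
        int ttl = 48;                       /* guard against malformed lists */

        while (pos && ttl--) {
            uint32_t hdr = pci_cfg_read32(bus, dev, fn, pos);
            if (hdr == 0xFFFFFFFFu)
                break;                      /* no device / no capability */
            if ((hdr & 0xFF) == 0x09)       /* PCI_CAP_ID_VNDR */
                return pos;
            pos = (hdr >> 8) & 0xFF;
        }
        return 0;
    }

    int main(void)
    {
        return find_vendor_cap(0, 3, 0) ? 0 : 1;   /* hypothetical BDF */
    }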
On Mon, 15 Feb 2016 13:26:29 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Mon, Feb 15, 2016 at 11:30:24AM +0100, Igor Mammedov wrote:
> [...]
> Okay, so let's be more specific. This is what makes you unhappy, I guess?
>
>     rsdp->rsdt_physical_address = cpu_to_le32(rsdt);
>     /* Address to be filled by Guest linker */
>     bios_linker_loader_add_pointer(linker, ACPI_BUILD_RSDP_FILE,
>                                    ACPI_BUILD_TABLE_FILE,
>                                    rsdp_table,
>                                    &rsdp->rsdt_physical_address,
>                                    sizeof rsdp->rsdt_physical_address);
>
> You need to remember to fill in the patched value.
> How would you improve on this?
It would be better and clearer if the user didn't have to write the following line:

rsdp->rsdt_physical_address = cpu_to_le32(rsdt);

and the signature would look like:

void bios_linker_loader_add_pointer(GArray *linker,
                                    const char *dest_file,
                                    uint_foo_t dst_offset,
                                    uint8_t dst_patched_size,
                                    const char *src_file,
                                    uint_foo_t src_offset)

so the above would look like:

bios_linker_loader_add_pointer(linker, ACPI_BUILD_RSDP_FILE,
                               (char *)&rsdp->rsdt_physical_address - rsdp_table->data,
                               sizeof rsdp->rsdt_physical_address,
                               ACPI_BUILD_TABLE_FILE,
                               rsdt_offset_in_TABLE_FILE);

> > And maybe add bios_linker_loader_add_aml_pointer(), which would be able
> > to check whether it patches AML correctly, while the old bios_linker_loader_add_pointer()
> > wouldn't do the check and would keep working with raw tables.
>
> I'm not sure how that will be used.
>
> > > > pointer = cpu_to_le64(pointer);
> > > > memcpy(dest_file->data + offset, &pointer, entry->pointer_size);
> > > >
> > > > all these src|dst_file names and confusing offsets (whatever they might mean)
> > > > give me a headache every time I need to remember how the linker
> > > > works and have to read both the QEMU and SeaBIOS code to figure it out each time.
> > > > That's what I'd call an unmaintainable and hard-to-use API.
> > >
> > > Right, there's a lack of documentation. It's my fault, so let's fix it.
> > > It's an API issue, nothing to do with the ABI.
> > >
> > > > > > issue 3:
> > > > > > all patching obviously needs bounds checking on the QEMU side
> > > > > > so it would abort early if it could corrupt memory.
> > > > >
> > > > > That's easy.
> > > > >
> > > > > > > > 2. If it's going to be used for patching AML, it should assert
> > > > > > > > when bios_linker_loader_add_pointer() is called if the to-be-patched
> > > > > > > > AML object is wrong and patching would corrupt the AML blob.
> > > > > > >
> > > > > > > Hmm, for example check that the patched data has
> > > > > > > the expected pattern?
> > > > > > yep, nothing can be done for raw tables, but that should be possible
> > > > > > for AML tables, and if the pattern is unsupported or the size doesn't match
> > > > > > it should abort QEMU early instead of corrupting the table.
> > > > >
> > > > > All of the above sounds reasonable. Would you like to take a stab
> > > > > at it, or would you prefer me to?
> > > > It would be better if it were you.
> > > >
> > > > I wouldn't want to maintain it, ever, as it's a too complex and hard-to-use API,
> > > > which I'd use only as a last resort if there weren't any other way
> > > > to implement the task at hand.
> > >
> > > Sorry, I don't get it. You don't like the API, write a better one
> > > for the existing ABI. If you prefer waiting for me to fix it,
> > > that's fine too, but no guarantees that you will like the new one
> > > or when it will happen.
> > >
> > > Look, there has been one change (a bugfix for alignment) in the several years
> > > since we added the linker. We don't maintain any compatibility flags
> > > around it *at all*. It might have a hard-to-use API, but that is the
> > > definition of easy to maintain. You are pushing allocating memory host
> > > side as an alternative; what happened there is the reverse. A ton of
> > > changes and pain all the way, and we get to maintain a bag of compat
> > > hacks for old machine types. You say we finally know what we are
> > > doing and won't have to change it any more. I'm not convinced.
> > You are talking (lots of compat issues) about the manually mapped initial memory map,
> > while I'm talking about allocation in the hotplug memory region.
> > It also had only one compat 'improvement' for alignment but otherwise > > it wasn't a source of any problems. > > I'll have to look at the differences. > Why don't they affect hotplug memory region? > > > > > > > > > > > > > > > > > > > > > > > > > It's there and we are not moving away from it, so why not > > > > > > > > > use it in more places? Or if you think it's wrong, why don't you build > > > > > > > > > something better then? We could then maybe use it for these things as > > > > > > > > > well. > > > > > > > > > > > > > > > > Yep, I think for vmgenid and even more so for nvdimm > > > > > > > > it would be better to allocate GPAs in QEMU and map backing > > > > > > > > MemoryRegions directly in QEMU. > > > > > > > > For nvdimm (main data region) > > > > > > > > we already do it using pc-dimm's GPA allocation algorithm, we also > > > > > > > > could use similar approach for nvdimm's label area and vmgenid. > > > > > > > > > > > > > > > > Here is a simple attempt to add a limited GPA allocator in high memory > > > > > > > > https://patchwork.ozlabs.org/patch/540852/ > > > > > > > > But it haven't got any comment from you and were ignored. > > > > > > > > Lets consider it and perhaps we could come up with GPA allocator > > > > > > > > that could be used for other things as well. > > > > > > > > > > > > > > For nvdimm label area, I agree passing things through > > > > > > > a 4K buffer seems inefficient. > > > > > > > > > > > > > > I'm not sure what's a better way though. > > > > > > > > > > > > > > Use 64 bit memory? Setting aside old guests such as XP, > > > > > > > does it break 32 bit guests? > > > > > > it might not work with 32bit guests, the same way as mem hotplug > > > > > > doesn't work for them unless they are PAE enabled. > > > > > > > > > > Right, I mean with PAE. > > > > I've tested it with 32-bit XP and Windows 10, they boot fine and > > > > vmgenid device is displayed as OK with buffer above 4Gb (on Win10). > > > > So at least is doesn't crash guest. > > > > I can't test more than that for 32 bit guests since utility > > > > to read vmgenid works only on Windows Server and there isn't > > > > a 32bit version of it. > > > > > > > > > > > > > > > but well that's a limitation of implementation and considering > > > > > > that storage nvdimm area is mapped at 64bit GPA it doesn't matter. > > > > > > > > > > > > > I'm really afraid of adding yet another allocator, I think you > > > > > > > underestimate the maintainance headache: it's not theoretical and is > > > > > > > already felt. > > > > > > Current maintenance headache is due to fixed handpicked > > > > > > mem layout, we can't do much with it for legacy machine > > > > > > types but with QEMU side GPA allocator we can try to switch > > > > > > to a flexible memory layout that would allocate GPA > > > > > > depending on QEMU config in a stable manner. > > > > > > > > > > So far, we didn't manage to. It seems to go in the reverse > > > > > direction were we add more and more control to let management > > > > > influence the layout. Things like alignment requirements > > > > > also tend to surface later and wreck havoc on whatever > > > > > we do. > > > > Was even there an attempt to try it before, could you point to it? > > > > > > Look at the mess we have with the existing allocator. > > > As a way to fix this unmaintainable mess, what I see is suggestions > > > to drop old machine types so people have to reinstall guests. > > > This does not inspire confidence. 
> > > > > > > The only attempt I've seen was https://patchwork.ozlabs.org/patch/540852/ > > > > but it haven't got any technical comments from you, > > > > except of 'I'm afraid that it won't work' on IRC. > > > > > > > > QEMU already has GPA allocator limited to memory hotplug AS > > > > and it has passed through 'growing' issues. What above patch > > > > proposes is to reuse already existing memory hotplug AS and > > > > maybe make its GPA allocator more generic (i.e. not tied > > > > only to pc-dimm) on top of it. > > > > > > You say we finally know what we are doing and won't have to change it > > > any more. I'm not convinced. > > > > > > > > > > > It's sufficient for vmgenid use-case and a definitely > > > > much more suitable for nvdimm which already uses it for mapping > > > > main storage MemoryRegion. > > > > > > > > > > well there is maintenance headache with bios_linker as well > > > > > > due to its complexity (multiple layers of indirection) and > > > > > > it will grow when more places try to use it. > > > > > > Yep we could use it as a hack, stealing RAM and trying implement > > > > > > backwards DMA or we could be less afraid and consider > > > > > > yet another allocator which will do the job without hacks > > > > > > which should benefit QEMU in a long run (it might be not easy > > > > > > to impl. it right but if we won't even try we would be buried > > > > > > in complex hacks that 'work' for now) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > And hey, if you want to use a pci device to pass the physical > > > > > > > > > > > > > address guest to host, instead of reserving > > > > > > > > > > > > > a couple of IO addresses, sure, stick it in pci config in > > > > > > > > > > > > > a vendor-specific capability, this way it'll get migrated > > > > > > > > > > > > > automatically. > > > > > > > > > > > > Could you elaborate more on this suggestion? > > > > > > > > > > > > > > > > > > > > > > I really just mean using PCI_Config operation region. > > > > > > > > > > > If you wish, I'll try to post a prototype next week. > > > > > > > > > > I don't know much about PCI but it would be interesting, > > > > > > > > > > perhaps we could use it somewhere else. > > > > > > > > > > > > > > > > > > > > However it should be checked if it works with Windows, > > > > > > > > > > for example PCI specific _DSM method is ignored by it > > > > > > > > > > if PCI device doesn't have working PCI driver bound to it. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > --- > > > > > > > > > > > > > > changes since 17: > > > > > > > > > > > > > > - small fixups suggested in v14 review by Michael S. Tsirkin" <mst@redhat.com> > > > > > > > > > > > > > > - make BAR prefetchable to make region cached as per MS spec > > > > > > > > > > > > > > - s/uuid/guid/ to match spec > > > > > > > > > > > > > > changes since 14: > > > > > > > > > > > > > > - reserve BAR resources so that Windows won't touch it > > > > > > > > > > > > > > during PCI rebalancing - "Michael S. Tsirkin" <mst@redhat.com> > > > > > > > > > > > > > > - ACPI: split VGEN device of PCI device descriptor > > > > > > > > > > > > > > and place it at PCI0 scope, so that won't be need trace its > > > > > > > > > > > > > > location on PCI buses. - "Michael S. 
Tsirkin" <mst@redhat.com> > > > > > > > > > > > > > > - permit only one vmgenid to be created > > > > > > > > > > > > > > - enable BAR be mapped above 4Gb if it can't be mapped at low mem > > > > > > > > > > > > > > --- > > > > > > > > > > > > > > default-configs/i386-softmmu.mak | 1 + > > > > > > > > > > > > > > default-configs/x86_64-softmmu.mak | 1 + > > > > > > > > > > > > > > docs/specs/pci-ids.txt | 1 + > > > > > > > > > > > > > > hw/i386/acpi-build.c | 56 +++++++++++++- > > > > > > > > > > > > > > hw/misc/Makefile.objs | 1 + > > > > > > > > > > > > > > hw/misc/vmgenid.c | 154 +++++++++++++++++++++++++++++++++++++ > > > > > > > > > > > > > > include/hw/misc/vmgenid.h | 27 +++++++ > > > > > > > > > > > > > > include/hw/pci/pci.h | 1 + > > > > > > > > > > > > > > 8 files changed, 240 insertions(+), 2 deletions(-) > > > > > > > > > > > > > > create mode 100644 hw/misc/vmgenid.c > > > > > > > > > > > > > > create mode 100644 include/hw/misc/vmgenid.h > > > > > > > > > > > > > > > > > > > > > > > > > > > > diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak > > > > > > > > > > > > > > index b177e52..6402439 100644 > > > > > > > > > > > > > > --- a/default-configs/i386-softmmu.mak > > > > > > > > > > > > > > +++ b/default-configs/i386-softmmu.mak > > > > > > > > > > > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > > > > > > > > > > > CONFIG_IOAPIC=y > > > > > > > > > > > > > > CONFIG_PVPANIC=y > > > > > > > > > > > > > > CONFIG_MEM_HOTPLUG=y > > > > > > > > > > > > > > +CONFIG_VMGENID=y > > > > > > > > > > > > > > CONFIG_NVDIMM=y > > > > > > > > > > > > > > CONFIG_ACPI_NVDIMM=y > > > > > > > > > > > > > > CONFIG_XIO3130=y > > > > > > > > > > > > > > diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak > > > > > > > > > > > > > > index 6e3b312..fdac18f 100644 > > > > > > > > > > > > > > --- a/default-configs/x86_64-softmmu.mak > > > > > > > > > > > > > > +++ b/default-configs/x86_64-softmmu.mak > > > > > > > > > > > > > > @@ -51,6 +51,7 @@ CONFIG_APIC=y > > > > > > > > > > > > > > CONFIG_IOAPIC=y > > > > > > > > > > > > > > CONFIG_PVPANIC=y > > > > > > > > > > > > > > CONFIG_MEM_HOTPLUG=y > > > > > > > > > > > > > > +CONFIG_VMGENID=y > > > > > > > > > > > > > > CONFIG_NVDIMM=y > > > > > > > > > > > > > > CONFIG_ACPI_NVDIMM=y > > > > > > > > > > > > > > CONFIG_XIO3130=y > > > > > > > > > > > > > > diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt > > > > > > > > > > > > > > index 0adcb89..e65ecf9 100644 > > > > > > > > > > > > > > --- a/docs/specs/pci-ids.txt > > > > > > > > > > > > > > +++ b/docs/specs/pci-ids.txt > > > > > > > > > > > > > > @@ -47,6 +47,7 @@ PCI devices (other than virtio): > > > > > > > > > > > > > > 1b36:0005 PCI test device (docs/specs/pci-testdev.txt) > > > > > > > > > > > > > > 1b36:0006 PCI Rocker Ethernet switch device > > > > > > > > > > > > > > 1b36:0007 PCI SD Card Host Controller Interface (SDHCI) > > > > > > > > > > > > > > +1b36:0009 PCI VM-Generation device > > > > > > > > > > > > > > 1b36:000a PCI-PCI bridge (multiseat) > > > > > > > > > > > > > > > > > > > > > > > > > > > > All these devices are documented in docs/specs. 
> > > > > > > > > > > > > > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c > > > > > > > > > > > > > > index 78758e2..0187262 100644 > > > > > > > > > > > > > > --- a/hw/i386/acpi-build.c > > > > > > > > > > > > > > +++ b/hw/i386/acpi-build.c > > > > > > > > > > > > > > @@ -44,6 +44,7 @@ > > > > > > > > > > > > > > #include "hw/acpi/tpm.h" > > > > > > > > > > > > > > #include "sysemu/tpm_backend.h" > > > > > > > > > > > > > > #include "hw/timer/mc146818rtc_regs.h" > > > > > > > > > > > > > > +#include "hw/misc/vmgenid.h" > > > > > > > > > > > > > > > > > > > > > > > > > > > > /* Supported chipsets: */ > > > > > > > > > > > > > > #include "hw/acpi/piix4.h" > > > > > > > > > > > > > > @@ -237,6 +238,40 @@ static void acpi_get_misc_info(AcpiMiscInfo *info) > > > > > > > > > > > > > > info->applesmc_io_base = applesmc_port(); > > > > > > > > > > > > > > } > > > > > > > > > > > > > > > > > > > > > > > > > > > > +static Aml *build_vmgenid_device(uint64_t buf_paddr) > > > > > > > > > > > > > > +{ > > > > > > > > > > > > > > + Aml *dev, *pkg, *crs; > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + dev = aml_device("VGEN"); > > > > > > > > > > > > > > + aml_append(dev, aml_name_decl("_HID", aml_string("QEMU0003"))); > > > > > > > > > > > > > > + aml_append(dev, aml_name_decl("_CID", aml_string("VM_Gen_Counter"))); > > > > > > > > > > > > > > + aml_append(dev, aml_name_decl("_DDN", aml_string("VM_Gen_Counter"))); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + pkg = aml_package(2); > > > > > > > > > > > > > > + /* low 32 bits of UUID buffer addr */ > > > > > > > > > > > > > > + aml_append(pkg, aml_int(buf_paddr & 0xFFFFFFFFUL)); > > > > > > > > > > > > > > + /* high 32 bits of UUID buffer addr */ > > > > > > > > > > > > > > + aml_append(pkg, aml_int(buf_paddr >> 32)); > > > > > > > > > > > > > > + aml_append(dev, aml_name_decl("ADDR", pkg)); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + /* > > > > > > > > > > > > > > + * VMGEN device has class_id PCI_CLASS_MEMORY_RAM and Windows > > > > > > > > > > > > > > + * displays it as "PCI RAM controller" which is marked as NO_DRV > > > > > > > > > > > > > > + * so Windows ignores VMGEN device completely and doesn't check > > > > > > > > > > > > > > + * for resource conflicts which during PCI rebalancing can lead > > > > > > > > > > > > > > + * to another PCI device claiming ignored BARs. To prevent this > > > > > > > > > > > > > > + * statically reserve resources used by VM_Gen_Counter. > > > > > > > > > > > > > > + * For more verbose comment see this commit message. > > > > > > > > > > > > > > > > > > > > > > > > > > What does "this commit message" mean? > > > > > > > > > > > > above commit message. Should I reword it to just 'see commit message' > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > + */ > > > > > > > > > > > > > > + crs = aml_resource_template(); > > > > > > > > > > > > > > + aml_append(crs, aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED, > > > > > > > > > > > > > > + AML_MAX_FIXED, AML_CACHEABLE, AML_READ_WRITE, 0, > > > > > > > > > > > > > > + buf_paddr, buf_paddr + VMGENID_VMGID_BUF_SIZE - 1, 0, > > > > > > > > > > > > > > + VMGENID_VMGID_BUF_SIZE)); > > > > > > > > > > > > > > + aml_append(dev, aml_name_decl("_CRS", crs)); > > > > > > > > > > > > > > + return dev; > > > > > > > > > > > > > > +} > > > > > > > > > > > > > > + > > > > > > > > > > > > > > /* > > > > > > > > > > > > > > * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE. 
> > > > > > > > > > > > > > * On i386 arch we only have two pci hosts, so we can look only for them. > > > > > > > > > > > > > > @@ -2171,6 +2206,7 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > > > > > > > > > > > } > > > > > > > > > > > > > > > > > > > > > > > > > > > > if (bus) { > > > > > > > > > > > > > > + Object *vmgen; > > > > > > > > > > > > > > Aml *scope = aml_scope("PCI0"); > > > > > > > > > > > > > > /* Scan all PCI buses. Generate tables to support hotplug. */ > > > > > > > > > > > > > > build_append_pci_bus_devices(scope, bus, pm->pcihp_bridge_en); > > > > > > > > > > > > > > @@ -2187,6 +2223,24 @@ build_ssdt(GArray *table_data, GArray *linker, > > > > > > > > > > > > > > aml_append(scope, dev); > > > > > > > > > > > > > > } > > > > > > > > > > > > > > > > > > > > > > > > > > > > + vmgen = find_vmgneid_dev(NULL); > > > > > > > > > > > > > > + if (vmgen) { > > > > > > > > > > > > > > + PCIDevice *pdev = PCI_DEVICE(vmgen); > > > > > > > > > > > > > > + uint64_t buf_paddr = > > > > > > > > > > > > > > + pci_get_bar_addr(pdev, VMGENID_VMGID_BUF_BAR); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + if (buf_paddr != PCI_BAR_UNMAPPED) { > > > > > > > > > > > > > > + aml_append(scope, build_vmgenid_device(buf_paddr)); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + method = aml_method("\\_GPE._E00", 0, > > > > > > > > > > > > > > + AML_NOTSERIALIZED); > > > > > > > > > > > > > > + aml_append(method, > > > > > > > > > > > > > > + aml_notify(aml_name("\\_SB.PCI0.VGEN"), > > > > > > > > > > > > > > + aml_int(0x80))); > > > > > > > > > > > > > > + aml_append(ssdt, method); > > > > > > > > > > > > > > + } > > > > > > > > > > > > > > + } > > > > > > > > > > > > > > + > > > > > > > > > > > > > > aml_append(sb_scope, scope); > > > > > > > > > > > > > > } > > > > > > > > > > > > > > } > > > > > > > > > > > > > > @@ -2489,8 +2543,6 @@ build_dsdt(GArray *table_data, GArray *linker, > > > > > > > > > > > > > > { > > > > > > > > > > > > > > aml_append(scope, aml_name_decl("_HID", aml_string("ACPI0006"))); > > > > > > > > > > > > > > > > > > > > > > > > > > > > - aml_append(scope, aml_method("_L00", 0, AML_NOTSERIALIZED)); > > > > > > > > > > > > > > - > > > > > > > > > > > > > > if (misc->is_piix4) { > > > > > > > > > > > > > > method = aml_method("_E01", 0, AML_NOTSERIALIZED); > > > > > > > > > > > > > > aml_append(method, > > > > > > > > > > > > > > diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs > > > > > > > > > > > > > > index d4765c2..1f05edd 100644 > > > > > > > > > > > > > > --- a/hw/misc/Makefile.objs > > > > > > > > > > > > > > +++ b/hw/misc/Makefile.objs > > > > > > > > > > > > > > @@ -43,4 +43,5 @@ obj-$(CONFIG_STM32F2XX_SYSCFG) += stm32f2xx_syscfg.o > > > > > > > > > > > > > > > > > > > > > > > > > > > > obj-$(CONFIG_PVPANIC) += pvpanic.o > > > > > > > > > > > > > > obj-$(CONFIG_EDU) += edu.o > > > > > > > > > > > > > > +obj-$(CONFIG_VMGENID) += vmgenid.o > > > > > > > > > > > > > > obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o > > > > > > > > > > > > > > diff --git a/hw/misc/vmgenid.c b/hw/misc/vmgenid.c > > > > > > > > > > > > > > new file mode 100644 > > > > > > > > > > > > > > index 0000000..a2fbdfc > > > > > > > > > > > > > > --- /dev/null > > > > > > > > > > > > > > +++ b/hw/misc/vmgenid.c > > > > > > > > > > > > > > @@ -0,0 +1,154 @@ > > > > > > > > > > > > > > +/* > > > > > > > > > > > > > > + * Virtual Machine Generation ID Device > > > > > > > > > > > > > > + * > > > > > > > > > > > > > > + * Copyright (C) 2016 Red Hat Inc. 
> > > > > > > > > > > > > > + * > > > > > > > > > > > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > > > > > > > > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > > > > > > > > > > > + * > > > > > > > > > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > > > > > > > > > > > + * See the COPYING file in the top-level directory. > > > > > > > > > > > > > > + * > > > > > > > > > > > > > > + */ > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +#include "hw/i386/pc.h" > > > > > > > > > > > > > > +#include "hw/pci/pci.h" > > > > > > > > > > > > > > +#include "hw/misc/vmgenid.h" > > > > > > > > > > > > > > +#include "hw/acpi/acpi.h" > > > > > > > > > > > > > > +#include "qapi/visitor.h" > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +#define VMGENID(obj) OBJECT_CHECK(VmGenIdState, (obj), VMGENID_DEVICE) > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +typedef struct VmGenIdState { > > > > > > > > > > > > > > + PCIDevice parent_obj; > > > > > > > > > > > > > > + MemoryRegion iomem; > > > > > > > > > > > > > > + union { > > > > > > > > > > > > > > + uint8_t guid[16]; > > > > > > > > > > > > > > + uint8_t guid_page[VMGENID_VMGID_BUF_SIZE]; > > > > > > > > > > > > > > + }; > > > > > > > > > > > > > > + bool guid_set; > > > > > > > > > > > > > > +} VmGenIdState; > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +Object *find_vmgneid_dev(Error **errp) > > > > > > > > > > > > > > +{ > > > > > > > > > > > > > > + Object *obj = object_resolve_path_type("", VMGENID_DEVICE, NULL); > > > > > > > > > > > > > > + if (!obj) { > > > > > > > > > > > > > > + error_setg(errp, VMGENID_DEVICE " is not found"); > > > > > > > > > > > > > > + } > > > > > > > > > > > > > > + return obj; > > > > > > > > > > > > > > +} > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +static void vmgenid_update_guest(VmGenIdState *s) > > > > > > > > > > > > > > +{ > > > > > > > > > > > > > > + Object *acpi_obj; > > > > > > > > > > > > > > + void *ptr = memory_region_get_ram_ptr(&s->iomem); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + memcpy(ptr, &s->guid, sizeof(s->guid)); > > > > > > > > > > > > > > + memory_region_set_dirty(&s->iomem, 0, sizeof(s->guid)); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + acpi_obj = object_resolve_path_type("", TYPE_ACPI_DEVICE_IF, NULL); > > > > > > > > > > > > > > + if (acpi_obj) { > > > > > > > > > > > > > > + AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_obj); > > > > > > > > > > > > > > + AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_obj); > > > > > > > > > > > > > > + ACPIREGS *acpi_regs = adevc->regs(adev); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + acpi_regs->gpe.sts[0] |= 1; /* _GPE.E00 handler */ > > > > > > > > > > > > > > + acpi_update_sci(acpi_regs, adevc->sci(adev)); > > > > > > > > > > > > > > + } > > > > > > > > > > > > > > +} > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +static void vmgenid_set_guid(Object *obj, const char *value, Error **errp) > > > > > > > > > > > > > > +{ > > > > > > > > > > > > > > + VmGenIdState *s = VMGENID(obj); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + if (qemu_uuid_parse(value, s->guid) < 0) { > > > > > > > > > > > > > > + error_setg(errp, "'%s." 
VMGENID_GUID > > > > > > > > > > > > > > + "': Failed to parse GUID string: %s", > > > > > > > > > > > > > > + object_get_typename(OBJECT(s)), > > > > > > > > > > > > > > + value); > > > > > > > > > > > > > > + return; > > > > > > > > > > > > > > + } > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + s->guid_set = true; > > > > > > > > > > > > > > + vmgenid_update_guest(s); > > > > > > > > > > > > > > +} > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +static void vmgenid_get_vmgid_addr(Object *obj, Visitor *v, void *opaque, > > > > > > > > > > > > > > + const char *name, Error **errp) > > > > > > > > > > > > > > +{ > > > > > > > > > > > > > > + int64_t value = pci_get_bar_addr(PCI_DEVICE(obj), 0); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + if (value == PCI_BAR_UNMAPPED) { > > > > > > > > > > > > > > + error_setg(errp, "'%s." VMGENID_VMGID_BUF_ADDR "': not initialized", > > > > > > > > > > > > > > + object_get_typename(OBJECT(obj))); > > > > > > > > > > > > > > + return; > > > > > > > > > > > > > > + } > > > > > > > > > > > > > > + visit_type_int(v, &value, name, errp); > > > > > > > > > > > > > > +} > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +static void vmgenid_initfn(Object *obj) > > > > > > > > > > > > > > +{ > > > > > > > > > > > > > > + VmGenIdState *s = VMGENID(obj); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + memory_region_init_ram(&s->iomem, obj, "vgid.bar", sizeof(s->guid_page), > > > > > > > > > > > > > > + &error_abort); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + object_property_add_str(obj, VMGENID_GUID, NULL, vmgenid_set_guid, NULL); > > > > > > > > > > > > > > + object_property_add(obj, VMGENID_VMGID_BUF_ADDR, "int", > > > > > > > > > > > > > > + vmgenid_get_vmgid_addr, NULL, NULL, NULL, NULL); > > > > > > > > > > > > > > +} > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +static void vmgenid_realize(PCIDevice *dev, Error **errp) > > > > > > > > > > > > > > +{ > > > > > > > > > > > > > > + VmGenIdState *s = VMGENID(dev); > > > > > > > > > > > > > > + bool ambiguous = false; > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + object_resolve_path_type("", VMGENID_DEVICE, &ambiguous); > > > > > > > > > > > > > > + if (ambiguous) { > > > > > > > > > > > > > > + error_setg(errp, "no more than one " VMGENID_DEVICE > > > > > > > > > > > > > > + " device is permitted"); > > > > > > > > > > > > > > + return; > > > > > > > > > > > > > > + } > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + if (!s->guid_set) { > > > > > > > > > > > > > > + error_setg(errp, "'%s." 
VMGENID_GUID "' property is not set", > > > > > > > > > > > > > > + object_get_typename(OBJECT(s))); > > > > > > > > > > > > > > + return; > > > > > > > > > > > > > > + } > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + vmstate_register_ram(&s->iomem, DEVICE(s)); > > > > > > > > > > > > > > + pci_register_bar(PCI_DEVICE(s), VMGENID_VMGID_BUF_BAR, > > > > > > > > > > > > > > + PCI_BASE_ADDRESS_MEM_PREFETCH | > > > > > > > > > > > > > > + PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, > > > > > > > > > > > > > > + &s->iomem); > > > > > > > > > > > > > > + return; > > > > > > > > > > > > > > +} > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +static void vmgenid_class_init(ObjectClass *klass, void *data) > > > > > > > > > > > > > > +{ > > > > > > > > > > > > > > + DeviceClass *dc = DEVICE_CLASS(klass); > > > > > > > > > > > > > > + PCIDeviceClass *k = PCI_DEVICE_CLASS(klass); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > + set_bit(DEVICE_CATEGORY_MISC, dc->categories); > > > > > > > > > > > > > > + dc->hotpluggable = false; > > > > > > > > > > > > > > + k->realize = vmgenid_realize; > > > > > > > > > > > > > > + k->vendor_id = PCI_VENDOR_ID_REDHAT; > > > > > > > > > > > > > > + k->device_id = PCI_DEVICE_ID_REDHAT_VMGENID; > > > > > > > > > > > > > > + k->class_id = PCI_CLASS_MEMORY_RAM; > > > > > > > > > > > > > > +} > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +static const TypeInfo vmgenid_device_info = { > > > > > > > > > > > > > > + .name = VMGENID_DEVICE, > > > > > > > > > > > > > > + .parent = TYPE_PCI_DEVICE, > > > > > > > > > > > > > > + .instance_size = sizeof(VmGenIdState), > > > > > > > > > > > > > > + .instance_init = vmgenid_initfn, > > > > > > > > > > > > > > + .class_init = vmgenid_class_init, > > > > > > > > > > > > > > +}; > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +static void vmgenid_register_types(void) > > > > > > > > > > > > > > +{ > > > > > > > > > > > > > > + type_register_static(&vmgenid_device_info); > > > > > > > > > > > > > > +} > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +type_init(vmgenid_register_types) > > > > > > > > > > > > > > diff --git a/include/hw/misc/vmgenid.h b/include/hw/misc/vmgenid.h > > > > > > > > > > > > > > new file mode 100644 > > > > > > > > > > > > > > index 0000000..b90882c > > > > > > > > > > > > > > --- /dev/null > > > > > > > > > > > > > > +++ b/include/hw/misc/vmgenid.h > > > > > > > > > > > > > > @@ -0,0 +1,27 @@ > > > > > > > > > > > > > > +/* > > > > > > > > > > > > > > + * Virtual Machine Generation ID Device > > > > > > > > > > > > > > + * > > > > > > > > > > > > > > + * Copyright (C) 2016 Red Hat Inc. > > > > > > > > > > > > > > + * > > > > > > > > > > > > > > + * Authors: Gal Hammer <ghammer@redhat.com> > > > > > > > > > > > > > > + * Igor Mammedov <imammedo@redhat.com> > > > > > > > > > > > > > > + * > > > > > > > > > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > > > > > > > > > > > > > > + * See the COPYING file in the top-level directory. 
> > > > > > > > > > > > > > + * > > > > > > > > > > > > > > + */ > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +#ifndef HW_MISC_VMGENID_H > > > > > > > > > > > > > > +#define HW_MISC_VMGENID_H > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +#include "qom/object.h" > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +#define VMGENID_DEVICE "vmgenid" > > > > > > > > > > > > > > +#define VMGENID_GUID "guid" > > > > > > > > > > > > > > +#define VMGENID_VMGID_BUF_ADDR "vmgid-addr" > > > > > > > > > > > > > > +#define VMGENID_VMGID_BUF_SIZE 0x1000 > > > > > > > > > > > > > > +#define VMGENID_VMGID_BUF_BAR 0 > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +Object *find_vmgneid_dev(Error **errp); > > > > > > > > > > > > > > + > > > > > > > > > > > > > > +#endif > > > > > > > > > > > > > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h > > > > > > > > > > > > > > index dedf277..f4c9d48 100644 > > > > > > > > > > > > > > --- a/include/hw/pci/pci.h > > > > > > > > > > > > > > +++ b/include/hw/pci/pci.h > > > > > > > > > > > > > > @@ -94,6 +94,7 @@ > > > > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB 0x0009 > > > > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_BRIDGE_SEAT 0x000a > > > > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_PXB_PCIE 0x000b > > > > > > > > > > > > > > +#define PCI_DEVICE_ID_REDHAT_VMGENID 0x000c > > > > > > > > > > > > > > #define PCI_DEVICE_ID_REDHAT_QXL 0x0100 > > > > > > > > > > > > > > > > > > > > > > > > > > > > #define FMT_PCIBUS PRIx64 > > > > > > > > > > > > > > -- > > > > > > > > > > > > > > 1.8.3.1 > > > > > > > > > > > > > > >
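The offset-based variant proposed above could be layered on the existing call without touching the BIOS-side ABI. A minimal sketch, assuming the current QEMU signature quoted earlier in this thread; the wrapper name, the uint32_t offset types and the bounds checks are illustrative only, and note that the destination blob still has to be passed so the helper can pre-fill the field, something the proposed signature leaves implicit:

/*
 * Sketch only: wraps the existing bios_linker_loader_add_pointer()
 * (hw/acpi/bios-linker-loader.h) so the caller passes two offsets
 * instead of pre-filling the destination field by hand.
 */
static void bios_linker_loader_add_pointer_off(GArray *linker,
                                               const char *dest_file,
                                               GArray *dest_blob,
                                               uint32_t dst_offset,
                                               uint8_t dst_patched_size,
                                               const char *src_file,
                                               uint32_t src_offset)
{
    uint64_t le_off = cpu_to_le64(src_offset);

    /* "issue 3": bounds-check on the QEMU side, abort early instead of
     * silently corrupting the blob */
    assert(dst_patched_size == 1 || dst_patched_size == 2 ||
           dst_patched_size == 4 || dst_patched_size == 8);
    assert(dst_offset + dst_patched_size <= dest_blob->len);

    /* "issue 2": pre-fill the to-be-patched field with the offset into
     * src_file; valid for little-endian fields of up to 8 bytes */
    memcpy(dest_blob->data + dst_offset, &le_off, dst_patched_size);

    /* emit the same ADD_POINTER command as today; at boot the BIOS adds
     * src_file's load address to the value stored above */
    bios_linker_loader_add_pointer(linker, dest_file, src_file, dest_blob,
                                   dest_blob->data + dst_offset,
                                   dst_patched_size);
}

Michael's RSDP example would then lose the manual cpu_to_le32() pre-fill and read:

    bios_linker_loader_add_pointer_off(linker, ACPI_BUILD_RSDP_FILE,
                                       rsdp_table,
                                       (char *)&rsdp->rsdt_physical_address -
                                       rsdp_table->data,
                                       sizeof rsdp->rsdt_physical_address,
                                       ACPI_BUILD_TABLE_FILE,
                                       rsdt /* RSDT offset in the tables blob */);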
On 02/11/2016 06:30 PM, Michael S. Tsirkin wrote: > On Thu, Feb 11, 2016 at 04:16:05PM +0100, Igor Mammedov wrote: >> On Tue, 9 Feb 2016 14:17:44 +0200 >> "Michael S. Tsirkin" <mst@redhat.com> wrote: >> >>> On Tue, Feb 09, 2016 at 11:46:08AM +0100, Igor Mammedov wrote: >>>>> So the linker interface solves this rather neatly: >>>>> bios allocates memory, bios passes memory map to guest. >>>>> Served us well for several years without need for extensions, >>>>> and it does solve the VM GEN ID problem, even though >>>>> 1. it was never designed for huge areas like nvdimm seems to want to use >>>>> 2. we might want to add a new 64 bit flag to avoid touching low memory >>>> linker interface is fine for some readonly data, like ACPI tables >>>> especially fixed tables not so for AML ones is one wants to patch it. >>>> >>>> However now when you want to use it for other purposes you start >>>> adding extensions and other guest->QEMU channels to communicate >>>> patching info back. >>>> It steals guest's memory which is also not nice and doesn't scale well. >>> >>> This is an argument I don't get. memory is memory. call it guest memory >>> or RAM backed PCI BAR - same thing. MMIO is cheaper of course >>> but much slower. >>> >>> ... >> It however matters for user, he pays for guest with XXX RAM but gets less >> than that. And that will be getting worse as a number of such devices >> increases. >> >>>>> OK fine, but returning PCI BAR address to guest is wrong. >>>>> How about reading it from ACPI then? Is it really >>>>> broken unless there's *also* a driver? >>>> I don't get question, MS Spec requires address (ADDR method), >>>> and it's read by ACPI (AML). >>> >>> You were unhappy about DMA into guest memory. >>> As a replacement for DMA, we could have AML read from >>> e.g. PCI and write into RAM. >>> This way we don't need to pass address to QEMU. >> That sounds better as it saves us from allocation of IO port >> and QEMU don't need to write into guest memory, the only question is >> if PCI_Config opregion would work with driver-less PCI device. > > Or PCI BAR for that reason. I don't know for sure. > >> >> And it's still pretty much not test-able since it would require >> fully running OSPM to execute AML side. > > AML is not testable, but that's nothing new. > You can test reading from PCI. > >>> >>>> As for working PCI_Config OpRegion without driver, I haven't tried, >>>> but I wouldn't be surprised if it doesn't, taking in account that >>>> MS introduced _DSM doesn't. >>>> >>>>> >>>>> >>>>>>>> Just compare with a graphics card design, where on device memory >>>>>>>> is mapped directly at some GPA not wasting RAM that guest could >>>>>>>> use for other tasks. >>>>>>> >>>>>>> This might have been true 20 years ago. Most modern cards do DMA. >>>>>> >>>>>> Modern cards, with it's own RAM, map its VRAM in address space directly >>>>>> and allow users use it (GEM API). So they do not waste conventional RAM. >>>>>> For example NVIDIA VRAM is mapped as PCI BARs the same way like in this >>>>>> series (even PCI class id is the same) >>>>> >>>>> Don't know enough about graphics really, I'm not sure how these are >>>>> relevant. NICs and disks certainly do DMA. And virtio gl seems to >>>>> mostly use guest RAM, not on card RAM. >>>>> >>>>>>>> VMGENID and NVDIMM use-cases look to me exactly the same, i.e. >>>>>>>> instead of consuming guest's RAM they should be mapped at >>>>>>>> some GPA and their memory accessed directly. 
>>>>>>>
>>>>>>> VMGENID is tied to a spec that rather arbitrarily asks for a fixed
>>>>>>> address. This breaks the straight-forward approach of using a
>>>>>>> rebalanceable PCI BAR.
>>>>>>
>>>>>> For PCI rebalance to work on Windows, one has to provide working PCI driver
>>>>>> otherwise OS will ignore it when rebalancing happens and
>>>>>> might map something else over ignored BAR.
>>>>>
>>>>> Does it disable the BAR then? Or just move it elsewhere?
>>>> it doesn't, it just blindly ignores BARs existence and maps BAR of
>>>> another device with driver over it.
>>>
>>> Interesting. On classical PCI this is a forbidden configuration.
>>> Maybe we do something that confuses windows?
>>> Could you tell me how to reproduce this behaviour?
>> #cat > t << EOF
>> pci_update_mappings_del
>> pci_update_mappings_add
>> EOF
>>
>> #./x86_64-softmmu/qemu-system-x86_64 -snapshot -enable-kvm -snapshot \
>> -monitor unix:/tmp/m,server,nowait -device pci-bridge,chassis_nr=1 \
>> -boot menu=on -m 4G -trace events=t ws2012r2x64dc.img \
>> -device ivshmem,id=foo,size=2M,shm,bus=pci.1,addr=01
>>
>> wait till OS boots, note BARs programmed for ivshmem
>> in my case it was
>> 01:01.0 0,0xfe800000+0x100
>> then execute script and watch pci_update_mappings* trace events
>>
>> # for i in $(seq 3 18); do printf -- "device_add e1000,bus=pci.1,addr=%x\n" $i | nc -U /tmp/m; sleep 5; done;
>>
>> hotplugging e1000,bus=pci.1,addr=12 triggers rebalancing where
>> Windows unmaps all BARs of nics on bridge but doesn't touch ivshmem
>> and then programs new BARs, where:
>> pci_update_mappings_add d=0x7fa02ff0cf90 01:11.0 0,0xfe800000+0x20000
>> creates overlapping BAR with ivshmem
>
> Thanks!
> We need to figure this out because currently this does not
> work properly (or maybe it works, but merely by chance).
> Me and Marcel will play with this.

I checked and indeed we have 2 separate problems:

1. ivshmem is declared as a PCI RAM controller and Windows *does* have a driver
for it, however it is not remapped on re-balancing.
You can see in Device Manager 2 working devices with the same MMIO region - strange!
This may be because PCI RAM controllers can't be re-mapped? Even then, it should not be overridden.
Maybe we need to add a clue to the OS in ACPI regarding this range?

2. PCI devices with no driver installed are not re-mapped. This can be OK
from the Windows point of view because the Resources window does not show the MMIO range
for this device.

If the other (re-mapped) device is working, it is pure luck. Both Memory Regions occupy the same range
and have the same priority.

We need to think about how to solve this.
One way would be to defer the BAR activation to the guest OS, but I am not sure of the consequences.
And this does not solve the ivshmem problem.

Thanks,
Marcel
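Marcel's "pure luck" point maps directly onto QEMU's memory API: BARs are mapped as overlapping subregions with equal priority, and for two equal-priority regions at the same address the API makes no guarantee about which one responds. A contrived sketch; the helper, the region names, the fixed address, and the use of the system memory region in place of the real PCI address space are all simplifications:

/* Sketch of the post-rebalance situation described above. */
static void sketch_equal_priority_overlap(MemoryRegion *ivshmem_bar,
                                          MemoryRegion *e1000_bar)
{
    MemoryRegion *space = get_system_memory(); /* stand-in for the PCI hole */

    /* ivshmem BAR left where the BIOS programmed it, never remapped */
    memory_region_add_subregion_overlap(space, 0xfe800000, ivshmem_bar, 1);

    /* e1000 BAR that Windows re-programs to the same GPA on rebalancing;
     * same priority, so neither region deterministically shadows the other */
    memory_region_add_subregion_overlap(space, 0xfe800000, e1000_bar, 1);
}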
On Tue, 16 Feb 2016 12:05:33 +0200
Marcel Apfelbaum <marcel@redhat.com> wrote:

> On 02/11/2016 06:30 PM, Michael S. Tsirkin wrote:
[...]
> I checked and indeed we have 2 separate problems:
>
> 1. ivshmem is declared as a PCI RAM controller and Windows *does* have a driver
> for it, however it is not remapped on re-balancing.
Does it really have a driver, i.e. an ivshmem-specific one?
It should have its own driver, otherwise userspace
won't be able to access/work with it and it would be pointless
to add such a device to the machine.

> You can see in Device Manager 2 working devices with the same MMIO region - strange!
> This may be because PCI RAM controllers can't be re-mapped? Even then, it should not be overridden.
> Maybe we need to add a clue to the OS in ACPI regarding this range?
>
> 2. PCI devices with no driver installed are not re-mapped. This can be OK
> from the Windows point of view because the Resources window does not show the MMIO range
> for this device.
>
> If the other (re-mapped) device is working, it is pure luck. Both Memory Regions occupy the same range
> and have the same priority.
>
> We need to think about how to solve this.
> One way would be to defer the BAR activation to the guest OS, but I am not sure of the consequences.
deferring won't solve the problem, as rebalancing could happen later
and make BARs overlap.
I've noticed that at startup Windows unmaps and then maps BARs
at the same addresses where the BIOS put them before.

> And this does not solve the ivshmem problem.
So far the only way to avoid overlapping BARs, due to Windows
doing rebalancing for driver-less devices, is to pin such
BARs statically with _CRS in the ACPI table, but as Michael said
it fragments the PCI address space.

> Thanks,
> Marcel
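One generic shape for such static pinning, using the same aml_* builders as the patch under review: declare the BIOS-assigned range under a PNP0C02 ("motherboard resources") device so the OS treats it as reserved. A sketch only; the PRES name and the parameters are placeholders, and whether Windows honors a PNP0C02 reservation during rebalancing the same way it honors the device's own _CRS would still need the reproduction test above:

static Aml *build_bar_reservation(uint64_t bar_paddr, uint64_t bar_size)
{
    Aml *dev = aml_device("PRES");
    Aml *crs = aml_resource_template();

    /* PNP0C02: generic "motherboard resources" reservation device */
    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C02")));

    aml_append(crs, aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
                                     AML_MAX_FIXED, AML_CACHEABLE,
                                     AML_READ_WRITE, 0,
                                     bar_paddr, bar_paddr + bar_size - 1,
                                     0, bar_size));
    aml_append(dev, aml_name_decl("_CRS", crs));
    return dev;
}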
On 02/16/2016 02:17 PM, Igor Mammedov wrote:
> On Tue, 16 Feb 2016 12:05:33 +0200
> Marcel Apfelbaum <marcel@redhat.com> wrote:
[...]
>> 1. ivshmem is declared as a PCI RAM controller and Windows *does* have a driver
>> for it, however it is not remapped on re-balancing.
> Does it really have a driver, i.e. an ivshmem-specific one?
> It should have its own driver, otherwise userspace
> won't be able to access/work with it and it would be pointless
> to add such a device to the machine.

No, it does not.

>> We need to think about how to solve this.
>> One way would be to defer the BAR activation to the guest OS, but I am not sure of the consequences.
> deferring won't solve the problem, as rebalancing could happen later
> and make BARs overlap.

Why not? If we do not activate the BAR in firmware and Windows does not have a driver
for it, it will not activate it at all, right?
Why would Windows activate the device BAR if it can't use it? At least this is what I hope.
Any other idea would be appreciated.

> I've noticed that at startup Windows unmaps and then maps BARs
> at the same addresses where the BIOS put them before.

Including devices without a working driver?

Thanks,
Marcel
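The mechanics behind the deferral idea come down to the command register: a memory BAR only decodes accesses while PCI_COMMAND_MEMORY is set, so firmware could assign an address yet leave decoding disabled for driver-less devices. A sketch of the corresponding check on the QEMU side; the helper name is hypothetical:

static bool vmgenid_bar_is_active(PCIDevice *pdev)
{
    uint16_t cmd = pci_get_word(pdev->config + PCI_COMMAND);

    /* pci_get_bar_addr() already reports PCI_BAR_UNMAPPED while the
     * memory-space bit is clear, so both conditions track "decoding" */
    return (cmd & PCI_COMMAND_MEMORY) &&
           pci_get_bar_addr(pdev, VMGENID_VMGID_BUF_BAR) != PCI_BAR_UNMAPPED;
}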
On Tue, 16 Feb 2016 14:36:49 +0200
Marcel Apfelbaum <marcel@redhat.com> wrote:

> On 02/16/2016 02:17 PM, Igor Mammedov wrote:
> > On Tue, 16 Feb 2016 12:05:33 +0200
> > Marcel Apfelbaum <marcel@redhat.com> wrote:
> >
> >> On 02/11/2016 06:30 PM, Michael S. Tsirkin wrote:
> >>> On Thu, Feb 11, 2016 at 04:16:05PM +0100, Igor Mammedov wrote:
> >>>> On Tue, 9 Feb 2016 14:17:44 +0200
> >>>> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> >>>>
> >>>>> On Tue, Feb 09, 2016 at 11:46:08AM +0100, Igor Mammedov wrote:
> >>>>>>> So the linker interface solves this rather neatly:
> >>>>>>> bios allocates memory, bios passes memory map to guest.
> >>>>>>> Served us well for several years without need for extensions,
> >>>>>>> and it does solve the VM GEN ID problem, even though
> >>>>>>> 1. it was never designed for huge areas like nvdimm seems to want to use
> >>>>>>> 2. we might want to add a new 64 bit flag to avoid touching low memory
> >>>>>> The linker interface is fine for some read-only data, like ACPI tables,
> >>>>>> especially fixed tables; not so for AML ones, if one wants to patch them.
> >>>>>>
> >>>>>> However now when you want to use it for other purposes you start
> >>>>>> adding extensions and other guest->QEMU channels to communicate
> >>>>>> patching info back.
> >>>>>> It steals the guest's memory, which is also not nice and doesn't scale well.
> >>>>>
> >>>>> This is an argument I don't get. Memory is memory. Call it guest memory
> >>>>> or RAM-backed PCI BAR - same thing. MMIO is cheaper of course
> >>>>> but much slower.
> >>>>>
> >>>>> ...
> >>>> It matters for the user, though: he pays for a guest with XXX RAM but gets
> >>>> less than that. And that will keep getting worse as the number of such
> >>>> devices increases.
> >>>>
> >>>>>>> OK fine, but returning PCI BAR address to guest is wrong.
> >>>>>>> How about reading it from ACPI then? Is it really
> >>>>>>> broken unless there's *also* a driver?
> >>>>>> I don't get the question; the MS spec requires an address (ADDR method),
> >>>>>> and it's read by ACPI (AML).
> >>>>>
> >>>>> You were unhappy about DMA into guest memory.
> >>>>> As a replacement for DMA, we could have AML read from
> >>>>> e.g. PCI and write into RAM.
> >>>>> This way we don't need to pass the address to QEMU.
> >>>> That sounds better, as it saves us from allocating an IO port
> >>>> and QEMU doesn't need to write into guest memory; the only question is
> >>>> whether a PCI_Config opregion would work with a driver-less PCI device.
> >>>
> >>> Or a PCI BAR, for that matter. I don't know for sure.
> >>>
> >>>> And it's still pretty much not testable since it would require
> >>>> a fully running OSPM to execute the AML side.
> >>>
> >>> AML is not testable, but that's nothing new.
> >>> You can test reading from PCI.
> >>>
> >>>>>> As for a PCI_Config OpRegion working without a driver, I haven't tried,
> >>>>>> but I wouldn't be surprised if it doesn't, taking into account that
> >>>>>> the MS-introduced _DSM doesn't.
> >>>>>>
> >>>>>>>>>> Just compare with a graphics card design, where on-device memory
> >>>>>>>>>> is mapped directly at some GPA, not wasting RAM that the guest could
> >>>>>>>>>> use for other tasks.
> >>>>>>>>>
> >>>>>>>>> This might have been true 20 years ago. Most modern cards do DMA.
> >>>>>>>>
> >>>>>>>> Modern cards, with their own RAM, map their VRAM into the address space
> >>>>>>>> directly and allow users to use it (GEM API). So they do not waste
> >>>>>>>> conventional RAM. For example, NVIDIA VRAM is mapped as PCI BARs the same
> >>>>>>>> way as in this series (even the PCI class id is the same).
> >>>>>>>
> >>>>>>> Don't know enough about graphics really, I'm not sure how these are
> >>>>>>> relevant. NICs and disks certainly do DMA. And virtio gl seems to
> >>>>>>> mostly use guest RAM, not on-card RAM.
> >>>>>>>
> >>>>>>>>>> VMGENID and NVDIMM use-cases look to me exactly the same, i.e.
> >>>>>>>>>> instead of consuming the guest's RAM they should be mapped at
> >>>>>>>>>> some GPA and their memory accessed directly.
> >>>>>>>>>
> >>>>>>>>> VMGENID is tied to a spec that rather arbitrarily asks for a fixed
> >>>>>>>>> address. This breaks the straight-forward approach of using a
> >>>>>>>>> rebalanceable PCI BAR.
> >>>>>>>>
> >>>>>>>> For PCI rebalance to work on Windows, one has to provide a working PCI
> >>>>>>>> driver; otherwise the OS will ignore the device when rebalancing happens
> >>>>>>>> and might map something else over the ignored BAR.
> >>>>>>>
> >>>>>>> Does it disable the BAR then? Or just move it elsewhere?
> >>>>>> It doesn't; it just blindly ignores the BAR's existence and maps the BAR
> >>>>>> of another device (one with a driver) over it.
> >>>>>
> >>>>> Interesting. On classical PCI this is a forbidden configuration.
> >>>>> Maybe we do something that confuses Windows?
> >>>>> Could you tell me how to reproduce this behaviour?
> >>>> # cat > t << EOF
> >>>> pci_update_mappings_del
> >>>> pci_update_mappings_add
> >>>> EOF
> >>>>
> >>>> # ./x86_64-softmmu/qemu-system-x86_64 -snapshot -enable-kvm -snapshot \
> >>>>    -monitor unix:/tmp/m,server,nowait -device pci-bridge,chassis_nr=1 \
> >>>>    -boot menu=on -m 4G -trace events=t ws2012r2x64dc.img \
> >>>>    -device ivshmem,id=foo,size=2M,shm,bus=pci.1,addr=01
> >>>>
> >>>> Wait till the OS boots and note the BARs programmed for ivshmem;
> >>>> in my case it was
> >>>>    01:01.0 0,0xfe800000+0x100
> >>>> then execute the script and watch the pci_update_mappings* trace events:
> >>>>
> >>>> # for i in $(seq 3 18); do printf -- "device_add e1000,bus=pci.1,addr=%x\n" $i | nc -U /tmp/m; sleep 5; done;
> >>>>
> >>>> Hotplugging e1000,bus=pci.1,addr=12 triggers rebalancing, where
> >>>> Windows unmaps all BARs of the NICs on the bridge but doesn't touch
> >>>> ivshmem, and then programs new BARs, where:
> >>>>    pci_update_mappings_add d=0x7fa02ff0cf90 01:11.0 0,0xfe800000+0x20000
> >>>> creates a BAR overlapping with ivshmem's.
> >>>
> >>> Thanks!
> >>> We need to figure this out because currently this does not
> >>> work properly (or maybe it works, but merely by chance).
> >>> Marcel and I will play with this.
> >>
> >> I checked and indeed we have 2 separate problems:
> >>
> >> 1. ivshmem is declared as a PCI RAM controller and Windows *does* have a
> >>    driver for it; however it is not remapped on re-balancing.
> > Does it really have a driver, i.e. an ivshmem-specific one?
> > It should have its own driver; otherwise userspace
> > won't be able to access/work with it and it would be pointless
> > to add such a device to the machine.
>
> No, it does not.

So it's a "PCI RAM controller", which is marked as NODRV in the INF file;
NODRV is used as a stub to prevent Windows from asking for a driver, the
assumption being that HW/firmware owns and manages the device.
And when rebalancing happens, Windows completely ignores NODRV
BARs, which causes overlaps with devices that do have PCI drivers.

> >> You can see in Device Manager 2 working devices with the same MMIO
> >> region - strange!
> >> This may be because PCI RAM controllers can't be re-mapped? Even then,
> >> it should not be overridden.
> >> Maybe we need to add a clue to the OS in ACPI regarding this range?
> >>
> >> 2. PCI devices with no driver installed are not re-mapped. This can be OK
> >>    from the Windows point of view because the Resources window does not
> >>    show the MMIO range for this device.
> >>
> >>    If the other (re-mapped) device works, it is pure luck. Both memory
> >>    regions occupy the same range and have the same priority.
> >>
> >> We need to think about how to solve this.
> >> One way would be to defer the BAR activation to the guest OS, but I am
> >> not sure of the consequences.
> > Deferring won't solve the problem, as rebalancing could happen later
> > and make BARs overlap.
>
> Why not? If we do not activate the BAR in firmware and Windows does not have
> a driver for it, it will not activate it at all, right?
> Why would Windows activate the device BAR if it can't use it? At least this
> is what I hope.
> Any other idea would be appreciated.
>
> > I've noticed that at startup Windows unmaps and then maps BARs
> > at the same addresses where the BIOS put them before.
>
> Including devices without a working driver?

I've just tried; it does so for ivshmem.

> Thanks,
> Marcel
>
> >> And this does not solve the ivshmem problem.
> > So far the only way to avoid overlapping BARs due to Windows
> > doing rebalancing for driver-less devices is to pin such
> > BARs statically with _CRS in the ACPI table, but as Michael said
> > it fragments the PCI address space.
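Marcel's "same range, same priority" observation maps directly onto QEMU's
memory API. A minimal sketch of the configuration that results when Windows
maps a driver-owned BAR over an ignored NODRV BAR; this is an editorial
illustration, not code from this series, and the function and variable names
are made up:

    #include "qemu/osdep.h"
    #include "exec/memory.h"

    /* Sketch: reproduce the post-rebalance state where two BAR memory
     * regions land at the same guest physical address. PCI BARs are
     * mapped with priority 1 (see pci_update_mappings()), so after the
     * overlap both subregions compete at equal priority. */
    static void map_two_bars_same_range(MemoryRegion *pci_address_space,
                                        MemoryRegion *bar_a,
                                        MemoryRegion *bar_b,
                                        hwaddr base)
    {
        memory_region_add_subregion_overlap(pci_address_space, base, bar_a, 1);
        memory_region_add_subregion_overlap(pci_address_space, base, bar_b, 1);
    }

QEMU resolves the overlap deterministically when it flattens the memory
topology, but nothing in the API defines which same-priority region wins,
which is why a guest that still "works" in this state does so only by luck.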
On Tue, Feb 16, 2016 at 02:51:25PM +0100, Igor Mammedov wrote:
> On Tue, 16 Feb 2016 14:36:49 +0200
> Marcel Apfelbaum <marcel@redhat.com> wrote:
>
> [earlier thread quoted in full; trimmed - see the previous message]
>
> > No, it does not.
>
> So it's a "PCI RAM controller", which is marked as NODRV in the INF file;
> NODRV is used as a stub to prevent Windows from asking for a driver, the
> assumption being that HW/firmware owns and manages the device.
> And when rebalancing happens, Windows completely ignores NODRV
> BARs, which causes overlaps with devices that do have PCI drivers.

But that can't work for classic PCI: if BARs overlap, behaviour is undefined.
We do something that Windows does not expect, something that makes it create
this setup. Is it something about enabling BARs in firmware? Something else?
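Michael's earlier suggestion, having AML read the GUID from PCI config space
instead of QEMU DMA-ing it into a BAR, could be prototyped with the same
aml_* helpers. The sketch below is an assumption-heavy illustration, not part
of this series: the vendor-capability offset 0x40 is hypothetical, and the
exact aml_operation_region()/aml_field() signatures changed around this time,
so check aml-build.h before relying on them:

    /* Sketch: expose the 16-byte GUID via a PCI_Config OperationRegion,
     * assuming a vendor-specific capability at config offset 0x40.
     * Whether Windows evaluates PCI_Config opregions for a driver-less
     * device is exactly the open question in this thread. */
    static Aml *build_vmgenid_cfg_reader(void)
    {
        Aml *dev, *field;

        dev = aml_device("VGEN");
        /* config-space window covering the GUID inside the capability */
        aml_append(dev, aml_operation_region("VCFG", AML_PCI_CONFIG,
                                             aml_int(0x40), 16));
        field = aml_field("VCFG", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
        aml_append(field, aml_named_field("GD00", 32));
        aml_append(field, aml_named_field("GD01", 32));
        aml_append(field, aml_named_field("GD02", 32));
        aml_append(field, aml_named_field("GD03", 32));
        aml_append(dev, field);

        /* ADDR would then become a method copying these dwords into an
         * AML-owned buffer, so no GPA needs to be passed back to QEMU. */
        return dev;
    }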
On Tue, Feb 16, 2016 at 02:36:49PM +0200, Marcel Apfelbaum wrote:
> >> 2. PCI devices with no driver installed are not re-mapped. This can be OK
> >>    from the Windows point of view because the Resources window does not
> >>    show the MMIO range for this device.
> >>
> >>    If the other (re-mapped) device works, it is pure luck. Both memory
> >>    regions occupy the same range and have the same priority.
> >>
> >> We need to think about how to solve this.
> >> One way would be to defer the BAR activation to the guest OS, but I am
> >> not sure of the consequences.
> > Deferring won't solve the problem, as rebalancing could happen later
> > and make BARs overlap.
>
> Why not? If we do not activate the BAR in firmware and Windows does not have
> a driver for it, it will not activate it at all, right?
> Why would Windows activate the device BAR if it can't use it? At least this
> is what I hope.
> Any other idea would be appreciated.

I wonder whether this is related to setting PnP in CMOS. See e.g.
http://oss.sgi.com/LDP/HOWTO/Plug-and-Play-HOWTO-4.html
and
https://support.microsoft.com/en-us/kb/321779
diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak
index b177e52..6402439 100644
--- a/default-configs/i386-softmmu.mak
+++ b/default-configs/i386-softmmu.mak
@@ -51,6 +51,7 @@ CONFIG_APIC=y
 CONFIG_IOAPIC=y
 CONFIG_PVPANIC=y
 CONFIG_MEM_HOTPLUG=y
+CONFIG_VMGENID=y
 CONFIG_NVDIMM=y
 CONFIG_ACPI_NVDIMM=y
 CONFIG_XIO3130=y
diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak
index 6e3b312..fdac18f 100644
--- a/default-configs/x86_64-softmmu.mak
+++ b/default-configs/x86_64-softmmu.mak
@@ -51,6 +51,7 @@ CONFIG_APIC=y
 CONFIG_IOAPIC=y
 CONFIG_PVPANIC=y
 CONFIG_MEM_HOTPLUG=y
+CONFIG_VMGENID=y
 CONFIG_NVDIMM=y
 CONFIG_ACPI_NVDIMM=y
 CONFIG_XIO3130=y
diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt
index 0adcb89..e65ecf9 100644
--- a/docs/specs/pci-ids.txt
+++ b/docs/specs/pci-ids.txt
@@ -47,6 +47,7 @@ PCI devices (other than virtio):
 1b36:0005 PCI test device (docs/specs/pci-testdev.txt)
 1b36:0006 PCI Rocker Ethernet switch device
 1b36:0007 PCI SD Card Host Controller Interface (SDHCI)
+1b36:0009 PCI VM-Generation device
 1b36:000a PCI-PCI bridge (multiseat)
 
 All these devices are documented in docs/specs.
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 78758e2..0187262 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -44,6 +44,7 @@
 #include "hw/acpi/tpm.h"
 #include "sysemu/tpm_backend.h"
 #include "hw/timer/mc146818rtc_regs.h"
+#include "hw/misc/vmgenid.h"
 
 /* Supported chipsets: */
 #include "hw/acpi/piix4.h"
@@ -237,6 +238,40 @@ static void acpi_get_misc_info(AcpiMiscInfo *info)
     info->applesmc_io_base = applesmc_port();
 }
 
+static Aml *build_vmgenid_device(uint64_t buf_paddr)
+{
+    Aml *dev, *pkg, *crs;
+
+    dev = aml_device("VGEN");
+    aml_append(dev, aml_name_decl("_HID", aml_string("QEMU0003")));
+    aml_append(dev, aml_name_decl("_CID", aml_string("VM_Gen_Counter")));
+    aml_append(dev, aml_name_decl("_DDN", aml_string("VM_Gen_Counter")));
+
+    pkg = aml_package(2);
+    /* low 32 bits of UUID buffer addr */
+    aml_append(pkg, aml_int(buf_paddr & 0xFFFFFFFFUL));
+    /* high 32 bits of UUID buffer addr */
+    aml_append(pkg, aml_int(buf_paddr >> 32));
+    aml_append(dev, aml_name_decl("ADDR", pkg));
+
+    /*
+     * VMGEN device has class_id PCI_CLASS_MEMORY_RAM and Windows
+     * displays it as "PCI RAM controller" which is marked as NO_DRV
+     * so Windows ignores VMGEN device completely and doesn't check
+     * for resource conflicts which during PCI rebalancing can lead
+     * to another PCI device claiming ignored BARs. To prevent this
+     * statically reserve resources used by VM_Gen_Counter.
+     * For more verbose comment see this commit message.
+     */
+    crs = aml_resource_template();
+    aml_append(crs, aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
+               AML_MAX_FIXED, AML_CACHEABLE, AML_READ_WRITE, 0,
+               buf_paddr, buf_paddr + VMGENID_VMGID_BUF_SIZE - 1, 0,
+               VMGENID_VMGID_BUF_SIZE));
+    aml_append(dev, aml_name_decl("_CRS", crs));
+    return dev;
+}
+
 /*
  * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
  * On i386 arch we only have two pci hosts, so we can look only for them.
@@ -2171,6 +2206,7 @@ build_ssdt(GArray *table_data, GArray *linker,
     }
 
     if (bus) {
+        Object *vmgen;
         Aml *scope = aml_scope("PCI0");
 
         /* Scan all PCI buses. Generate tables to support hotplug. */
         build_append_pci_bus_devices(scope, bus, pm->pcihp_bridge_en);
@@ -2187,6 +2223,24 @@ build_ssdt(GArray *table_data, GArray *linker,
             aml_append(scope, dev);
         }
 
+        vmgen = find_vmgneid_dev(NULL);
+        if (vmgen) {
+            PCIDevice *pdev = PCI_DEVICE(vmgen);
+            uint64_t buf_paddr =
+                pci_get_bar_addr(pdev, VMGENID_VMGID_BUF_BAR);
+
+            if (buf_paddr != PCI_BAR_UNMAPPED) {
+                aml_append(scope, build_vmgenid_device(buf_paddr));
+
+                method = aml_method("\\_GPE._E00", 0,
+                                    AML_NOTSERIALIZED);
+                aml_append(method,
+                           aml_notify(aml_name("\\_SB.PCI0.VGEN"),
+                                      aml_int(0x80)));
+                aml_append(ssdt, method);
+            }
+        }
+
         aml_append(sb_scope, scope);
     }
 }
@@ -2489,8 +2543,6 @@ build_dsdt(GArray *table_data, GArray *linker,
     {
         aml_append(scope, aml_name_decl("_HID", aml_string("ACPI0006")));
 
-        aml_append(scope, aml_method("_L00", 0, AML_NOTSERIALIZED));
-
         if (misc->is_piix4) {
             method = aml_method("_E01", 0, AML_NOTSERIALIZED);
             aml_append(method,
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
index d4765c2..1f05edd 100644
--- a/hw/misc/Makefile.objs
+++ b/hw/misc/Makefile.objs
@@ -43,4 +43,5 @@ obj-$(CONFIG_STM32F2XX_SYSCFG) += stm32f2xx_syscfg.o
 
 obj-$(CONFIG_PVPANIC) += pvpanic.o
 obj-$(CONFIG_EDU) += edu.o
+obj-$(CONFIG_VMGENID) += vmgenid.o
 obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o
diff --git a/hw/misc/vmgenid.c b/hw/misc/vmgenid.c
new file mode 100644
index 0000000..a2fbdfc
--- /dev/null
+++ b/hw/misc/vmgenid.c
@@ -0,0 +1,154 @@
+/*
+ * Virtual Machine Generation ID Device
+ *
+ * Copyright (C) 2016 Red Hat Inc.
+ *
+ * Authors: Gal Hammer <ghammer@redhat.com>
+ *          Igor Mammedov <imammedo@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "hw/i386/pc.h"
+#include "hw/pci/pci.h"
+#include "hw/misc/vmgenid.h"
+#include "hw/acpi/acpi.h"
+#include "qapi/visitor.h"
+
+#define VMGENID(obj) OBJECT_CHECK(VmGenIdState, (obj), VMGENID_DEVICE)
+
+typedef struct VmGenIdState {
+    PCIDevice parent_obj;
+    MemoryRegion iomem;
+    union {
+        uint8_t guid[16];
+        uint8_t guid_page[VMGENID_VMGID_BUF_SIZE];
+    };
+    bool guid_set;
+} VmGenIdState;
+
+Object *find_vmgneid_dev(Error **errp)
+{
+    Object *obj = object_resolve_path_type("", VMGENID_DEVICE, NULL);
+    if (!obj) {
+        error_setg(errp, VMGENID_DEVICE " is not found");
+    }
+    return obj;
+}
+
+static void vmgenid_update_guest(VmGenIdState *s)
+{
+    Object *acpi_obj;
+    void *ptr = memory_region_get_ram_ptr(&s->iomem);
+
+    memcpy(ptr, &s->guid, sizeof(s->guid));
+    memory_region_set_dirty(&s->iomem, 0, sizeof(s->guid));
+
+    acpi_obj = object_resolve_path_type("", TYPE_ACPI_DEVICE_IF, NULL);
+    if (acpi_obj) {
+        AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_obj);
+        AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_obj);
+        ACPIREGS *acpi_regs = adevc->regs(adev);
+
+        acpi_regs->gpe.sts[0] |= 1; /* _GPE.E00 handler */
+        acpi_update_sci(acpi_regs, adevc->sci(adev));
+    }
+}
+
+static void vmgenid_set_guid(Object *obj, const char *value, Error **errp)
+{
+    VmGenIdState *s = VMGENID(obj);
+
+    if (qemu_uuid_parse(value, s->guid) < 0) {
+        error_setg(errp, "'%s." VMGENID_GUID
+                   "': Failed to parse GUID string: %s",
+                   object_get_typename(OBJECT(s)),
+                   value);
+        return;
+    }
+
+    s->guid_set = true;
+    vmgenid_update_guest(s);
+}
+
+static void vmgenid_get_vmgid_addr(Object *obj, Visitor *v, void *opaque,
+                                   const char *name, Error **errp)
+{
+    int64_t value = pci_get_bar_addr(PCI_DEVICE(obj), 0);
+
+    if (value == PCI_BAR_UNMAPPED) {
+        error_setg(errp, "'%s." VMGENID_VMGID_BUF_ADDR "': not initialized",
+                   object_get_typename(OBJECT(obj)));
+        return;
+    }
+    visit_type_int(v, &value, name, errp);
+}
+
+static void vmgenid_initfn(Object *obj)
+{
+    VmGenIdState *s = VMGENID(obj);
+
+    memory_region_init_ram(&s->iomem, obj, "vgid.bar", sizeof(s->guid_page),
+                           &error_abort);
+
+    object_property_add_str(obj, VMGENID_GUID, NULL, vmgenid_set_guid, NULL);
+    object_property_add(obj, VMGENID_VMGID_BUF_ADDR, "int",
+                        vmgenid_get_vmgid_addr, NULL, NULL, NULL, NULL);
+}
+
+
+static void vmgenid_realize(PCIDevice *dev, Error **errp)
+{
+    VmGenIdState *s = VMGENID(dev);
+    bool ambiguous = false;
+
+    object_resolve_path_type("", VMGENID_DEVICE, &ambiguous);
+    if (ambiguous) {
+        error_setg(errp, "no more than one " VMGENID_DEVICE
+                   " device is permitted");
+        return;
+    }
+
+    if (!s->guid_set) {
+        error_setg(errp, "'%s." VMGENID_GUID "' property is not set",
+                   object_get_typename(OBJECT(s)));
+        return;
+    }
+
+    vmstate_register_ram(&s->iomem, DEVICE(s));
+    pci_register_bar(PCI_DEVICE(s), VMGENID_VMGID_BUF_BAR,
+                     PCI_BASE_ADDRESS_MEM_PREFETCH |
+                     PCI_BASE_ADDRESS_SPACE_MEMORY |
+                     PCI_BASE_ADDRESS_MEM_TYPE_64,
+                     &s->iomem);
+    return;
+}
+
+static void vmgenid_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
+
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+    dc->hotpluggable = false;
+    k->realize = vmgenid_realize;
+    k->vendor_id = PCI_VENDOR_ID_REDHAT;
+    k->device_id = PCI_DEVICE_ID_REDHAT_VMGENID;
+    k->class_id = PCI_CLASS_MEMORY_RAM;
+}
+
+static const TypeInfo vmgenid_device_info = {
+    .name          = VMGENID_DEVICE,
+    .parent        = TYPE_PCI_DEVICE,
+    .instance_size = sizeof(VmGenIdState),
+    .instance_init = vmgenid_initfn,
+    .class_init    = vmgenid_class_init,
+};
+
+static void vmgenid_register_types(void)
+{
+    type_register_static(&vmgenid_device_info);
+}
+
+type_init(vmgenid_register_types)
diff --git a/include/hw/misc/vmgenid.h b/include/hw/misc/vmgenid.h
new file mode 100644
index 0000000..b90882c
--- /dev/null
+++ b/include/hw/misc/vmgenid.h
@@ -0,0 +1,27 @@
+/*
+ * Virtual Machine Generation ID Device
+ *
+ * Copyright (C) 2016 Red Hat Inc.
+ *
+ * Authors: Gal Hammer <ghammer@redhat.com>
+ *          Igor Mammedov <imammedo@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef HW_MISC_VMGENID_H
+#define HW_MISC_VMGENID_H
+
+#include "qom/object.h"
+
+#define VMGENID_DEVICE "vmgenid"
+#define VMGENID_GUID "guid"
+#define VMGENID_VMGID_BUF_ADDR "vmgid-addr"
+#define VMGENID_VMGID_BUF_SIZE 0x1000
+#define VMGENID_VMGID_BUF_BAR 0
+
+Object *find_vmgneid_dev(Error **errp);
+
+#endif
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index dedf277..f4c9d48 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -94,6 +94,7 @@
 #define PCI_DEVICE_ID_REDHAT_PXB         0x0009
 #define PCI_DEVICE_ID_REDHAT_BRIDGE_SEAT 0x000a
 #define PCI_DEVICE_ID_REDHAT_PXB_PCIE    0x000b
+#define PCI_DEVICE_ID_REDHAT_VMGENID     0x000c
 #define PCI_DEVICE_ID_REDHAT_QXL         0x0100
 
 #define FMT_PCIBUS                      PRIx64
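For context on how the guest side consumes this, once ACPI exposes the
buffer's GPA via the \_SB.PCI0.VGEN.ADDR package, guest software can read the
16-byte GUID at that physical address. Below is a hedged sketch for a Linux
guest; it assumes root privileges, a kernel that permits /dev/mem mappings of
the range, and a buf_paddr obtained out of band (e.g. from an acpidump of the
SSDT). It is illustrative only and not part of this series:

    /* Sketch: dump the VM generation ID from the GPA advertised by
     * VGEN.ADDR. The address is passed as argv[1]; it is NOT discovered
     * here - take it from the guest's ACPI tables. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <guid-buffer-gpa>\n", argv[0]);
            return 1;
        }
        uint64_t buf_paddr = strtoull(argv[1], NULL, 0);
        long page = sysconf(_SC_PAGESIZE);

        int fd = open("/dev/mem", O_RDONLY);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }
        /* map the page containing the GUID buffer */
        uint8_t *p = mmap(NULL, page, PROT_READ, MAP_SHARED, fd,
                          buf_paddr & ~(uint64_t)(page - 1));
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        uint8_t *guid = p + (buf_paddr & (page - 1));
        for (int i = 0; i < 16; i++) {
            printf("%02x", guid[i]);
        }
        printf("\n");
        munmap(p, page);
        close(fd);
        return 0;
    }

A real consumer would instead follow the MS spec and also watch for the ACPI
Notify (0x80) the device raises on GUID change, which is what the
\_GPE._E00 handler in the patch is wired to deliver.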