Message ID: 1476448259-99631-1-git-send-email-wei.w.wang@intel.com (mailing list archive)
State: New, archived
On 14/10/2016 14:30, Wei Wang wrote:
> PV interrupts (PVI) enable a guest to send interrupts to another via
> hypercalls.
>
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> ---
>  pv_interrupt_controller.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>  create mode 100644 pv_interrupt_controller.c
>
> diff --git a/pv_interrupt_controller.c b/pv_interrupt_controller.c
> new file mode 100644
> index 0000000..5f2431d
> --- /dev/null
> +++ b/pv_interrupt_controller.c
> @@ -0,0 +1,27 @@
> +
> +The pv interrupt (PVI) hypercall is proposed to support one guest sending
> +interrupts to another guest using hypercalls. The following pseudocode shows
> +how a PVI is sent from the guest:
> +
> +#define KVM_HC_PVI 9
> +kvm_hypercall2(KVM_HC_PVI, guest_uuid, guest_gsi);
> +
> +The new hypercall number, KVM_HC_PVI, is used for the purpose of sending PVIs.
> +guest_uuid identifies the guest that the interrupt will be sent to.
> +guest_gsi identifies the interrupt source of that guest.
> +
> +The PVI hypercall handler in KVM iterates the VM list (the vm_list field in
> +the kvm struct), finds the guest with the passed guest_uuid, and injects an
> +interrupt to the guest with the guest_gsi number.
> +
> +Finally, on the permission to send a PVI from one guest to another:
> +in the PVI setup phase, the PVI receiver should get the sender's UUID (e.g. via
> +the vhost-user protocol extension implemented between QEMUs) and pass it to KVM.
> +Two new fields will be added to struct kvm { }:
> +
> ++uuid_t uuid; // the guest uuid
> ++uuid_t pvi_sender_uuid[MAX_NUM]; // the sender's uuid should be registered here
> +
> +A PVI will not be injected to the receiver guest if the sender's uuid does not
> +appear in the receiver's pvi_sender_uuid table.

Why would you do that instead of just using the local APIC?...

Paolo
On Friday, October 14, 2016 8:59 PM, Paolo Bonzini wrote:
> On 14/10/2016 14:30, Wei Wang wrote:
> > PV interrupts (PVI) enable a guest to send interrupts to another via
> > hypercalls.
> > [...]
>
> Why would you do that instead of just using the local APIC?...

The interrupt will be delivered to LAPIC - the hypercall handler injects the interrupt via kvm_set_irq(kvm, GSI, ..), which finally uses LAPIC, right?

Best,
Wei
On 14/10/2016 16:00, Wang, Wei W wrote:
>> Why would you do that instead of just using the local APIC?...
>
> The interrupt will be delivered to LAPIC - the hypercall handler
> injects the interrupt via kvm_set_irq(kvm, GSI, ..), which finally
> uses LAPIC, right?

But why do you need that? You can just deliver it to the appropriate local APIC interrupt, there's no need to know the GSI. The guest knows how it has configured the GSIs.

You haven't explained the use case.

Paolo
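[A minimal sketch of what "deliver it to the appropriate local APIC interrupt" can look like from userspace today, using the existing KVM_SIGNAL_MSI ioctl; the destination APIC id and vector are illustrative placeholders, not values from this thread:]

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Inject an interrupt straight at the guest's local APICs: KVM routes
 * the MSI by its address/data pair, so the caller never touches the
 * guest's GSI configuration. dest_apic_id and vector are placeholders. */
static int inject_msi(int vm_fd, unsigned int dest_apic_id, unsigned int vector)
{
    struct kvm_msi msi = {
        .address_lo = 0xfee00000u | (dest_apic_id << 12), /* fixed dest mode */
        .address_hi = 0,
        .data       = vector,   /* fixed delivery mode, edge trigger */
    };
    return ioctl(vm_fd, KVM_SIGNAL_MSI, &msi);
}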
On Friday, October 14, 2016 10:13 PM, Paolo Bonzini wrote:
> On 14/10/2016 16:00, Wang, Wei W wrote:
>>> Why would you do that instead of just using the local APIC?...
>>
>> The interrupt will be delivered to LAPIC - the hypercall handler
>> injects the interrupt via kvm_set_irq(kvm, GSI, ..), which finally
>> uses LAPIC, right?
>
> But why do you need that? You can just deliver it to the appropriate
> local APIC interrupt, there's no need to know the GSI. The guest knows
> how it has configured the GSIs.
>
> You haven't explained the use case.

Sure. One example here is to send an interrupt from a virtio driver (e.g. the vhost-pci-net that we are working on) on a guest to a virtio-net device on another guest. In terms of injecting an interrupt to the virtio-net device, should we give the sender the related GSI assigned to the virtio-net device (i.e. the GSI of an RX queue, to notify the virtio-net driver to receive packets from that RX queue)?

Can you please explain more about "just delivering it to the appropriate local APIC"? What would be the source of the interrupt that we are injecting to? Thanks.

Best,
Wei
On 14/10/2016 16:56, Wang, Wei W wrote:
> On Friday, October 14, 2016 10:13 PM, Paolo Bonzini wrote:
>> On 14/10/2016 16:00, Wang, Wei W wrote:
>>>> Why would you do that instead of just using the local APIC?...
>>>
>>> The interrupt will be delivered to LAPIC - the hypercall handler
>>> injects the interrupt via kvm_set_irq(kvm, GSI, ..), which finally
>>> uses LAPIC, right?
>>
>> But why do you need that? You can just deliver it to the
>> appropriate local APIC interrupt, there's no need to know the GSI.
>> The guest knows how it has configured the GSIs.
>>
>> You haven't explained the use case.
>
> Sure. One example here is to send an interrupt from a virtio driver
> (e.g. the vhost-pci-net that we are working on) on a guest to a
> virtio-net device on another guest. In terms of injecting an
> interrupt to the virtio-net device, should we give the sender the
> related GSI assigned to the virtio-net device (i.e. the GSI of an RX
> queue, to notify the virtio-net driver to receive packets from that
> RX queue)?

In terms of vhost-pci, a write to an MMIO register on the vhost side (the guest->host doorbell) would trigger an irq on the virtio side (the host->guest doorbell). There is no need to know GSIs, they are entirely hidden in QEMU.

Paolo

> Can you please explain more about "just delivering it to the
> appropriate local APIC"? What would be the source of the interrupt
> that we are injecting to? Thanks.
>
> Best, Wei
On Friday, October 14, 2016 11:08 PM, Paolo Bonzini wrote:
> On 14/10/2016 16:56, Wang, Wei W wrote:
>> On Friday, October 14, 2016 10:13 PM, Paolo Bonzini wrote:
>>> On 14/10/2016 16:00, Wang, Wei W wrote:
>>>>> Why would you do that instead of just using the local APIC?...
>>>>
>>>> The interrupt will be delivered to LAPIC - the hypercall handler
>>>> injects the interrupt via kvm_set_irq(kvm, GSI, ..), which finally
>>>> uses LAPIC, right?
>>>
>>> But why do you need that? You can just deliver it to the appropriate
>>> local APIC interrupt, there's no need to know the GSI.
>>> The guest knows how it has configured the GSIs.
>>>
>>> You haven't explained the use case.
>>
>> Sure. One example here is to send an interrupt from a virtio driver
>> (e.g. the vhost-pci-net that we are working on) on a guest to a
>> virtio-net device on another guest. In terms of injecting an interrupt
>> to the virtio-net device, should we give the sender the related GSI
>> assigned to the virtio-net device (i.e. the GSI of an RX queue, to
>> notify the virtio-net driver to receive packets from that RX queue)?
>
> In terms of vhost-pci, a write to an MMIO register on the vhost side
> (the guest->host doorbell) would trigger an irq on the virtio side (the
> host->guest doorbell).

Yes, that's the traditional mechanism - ioeventfd and irqfd. They're fine for the current "guest virtio <-> host" notification, but when it comes to the "guest virtio <-> guest virtio" notification case, it should be clear where the interrupt goes (e.g. which specific device interrupt it is), rather than just trapping to the host. So, instead of simply trapping to the host on an MMIO write, a hypercall gives us the flexibility to pass some parameters.

> There is no need to know GSIs, they are entirely hidden in QEMU.

The GSI number is assigned in QEMU. By making use of the traditional irqfd implementation code in QEMU, a virtq's GSI is stored in the irqfd struct - "VirtIOIRQFD->virq" - so we can pass it (or them, in the multi-queue case) to the sender. I prefer GSI because the KVM irq routing table is indexed by GSI. Would this be acceptable?

Alternatively, we can pass the vector of the virtq.

Best,
Wei
On Friday, October 14, 2016 6:51:53 PM, "Wei W Wang" <wei.w.wang@intel.com> wrote:
> When it comes to the "guest virtio <-> guest virtio" notification
> case, it should be clear where the interrupt goes (e.g. which specific
> device interrupt it is), rather than just trapping to the host. So,
> instead of simply trapping to the host on an MMIO write, a hypercall
> gives us the flexibility to pass some parameters.

What parameters do you need? There is no difference between "which specific device interrupt you are raising" and "which specific virtqueue you are kicking". The latter uses ioeventfd just fine, and VFIO also uses eventfd successfully.

> The GSI number is assigned in QEMU. By making use of the traditional
> irqfd implementation code in QEMU, a virtq's GSI is stored in the
> irqfd struct - "VirtIOIRQFD->virq" - so we can pass it (or them, in
> the multi-queue case) to the sender. I prefer GSI because the KVM irq
> routing table is indexed by GSI. Would this be acceptable?
> Alternatively, we can pass the vector of the virtq.

No, the hypercall will not be accepted in any form. The established protocols for communication between KVM and the outside world, including other KVM instances, are MMIO write and irqfd.

Paolo
On Saturday, October 15, 2016 2:30 AM, Paolo Bonzini wrote:
> On Friday, October 14, 2016 6:51:53 PM, "Wei W Wang"
> <wei.w.wang@intel.com> wrote:
>> When it comes to the "guest virtio <-> guest virtio" notification
>> case, it should be clear where the interrupt goes (e.g. which
>> specific device interrupt it is), rather than just trapping to the
>> host. So, instead of simply trapping to the host on an MMIO write,
>> a hypercall gives us the flexibility to pass some parameters.
>
> What parameters do you need? There is no difference between "which
> specific device interrupt you are raising" and "which specific
> virtqueue you are kicking". The latter uses ioeventfd just fine, and
> VFIO also uses eventfd successfully.

We need two parameters: the destination UUID and the GSI, to identify the destination VM and the destination queue interrupt.

Please let me elaborate on the two possible solutions, based on the existing eventfd mechanism and on the new hypercall mechanism - how we can use them to achieve the notification from the virtio1 driver to the virtio2 driver (across world contexts). We can't deliver interrupts directly from the virtio1 driver to the virtio2 driver, so for both solutions we need a trampoline - the host. A uuid field is necessary in the kvm struct so that the trampoline knows who is who. Generally, two steps are needed:

Step 1: virtio1's driver sends the interrupt request to the trampoline;
Step 2: the trampoline sends the interrupt request to virtio2's driver.

*Solution 1: eventfd*
Step 1: achieved by virtio1's ioeventfd;
Step 2: achieved by virtio2's irqfd.

In the setup phase, the trampoline makes a connection between virtio1's ioeventfd and virtio2's irqfd. So, in this solution, we would need a host kernel module to do the trampoline work - connection setup and interrupt request delivery.

*Solution 2: hypercall*
Step 1: achieved by hypercall;
Step 2: achieved by interrupt injection with the GSI.

We only need to patch the hypercall handler to inject the interrupt to the destination.

Pros and cons: from the performance point of view, the eventfd solution has a much longer code path (if we walk through the whole path by which the requests are handled), which results in longer latency. From the design point of view, I think using a hypercall makes the design simple and straightforward.

>> The GSI number is assigned in QEMU. By making use of the traditional
>> irqfd implementation code in QEMU, a virtq's GSI is stored in the
>> irqfd struct - "VirtIOIRQFD->virq" - so we can pass it (or them, in
>> the multi-queue case) to the sender. I prefer GSI because the KVM
>> irq routing table is indexed by GSI. Would this be acceptable?
>> Alternatively, we can pass the vector of the virtq.
>
> No, the hypercall will not be accepted in any form. The established
> protocols for communication between KVM and the outside world,
> including other KVM instances, are MMIO write and irqfd.

Could you please give more details about why a hypercall is not welcome, given that hypercalls have already been implemented in KVM for some usages? Thanks.

Best,
Wei
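[A guest-kernel sketch of the Solution 2 send path, under two assumptions the thread leaves open: KVM_HC_PVI really is number 9 (it was never allocated), and the 128-bit destination UUID is passed by guest-physical address, since it cannot fit in a single hypercall register:]

#include <linux/types.h>
#include <linux/kvm_para.h>   /* kvm_hypercall2() */
#include <asm/io.h>           /* virt_to_phys() */

#define KVM_HC_PVI 9          /* proposed in this thread, never merged */

/* Ask the host to inject dest_gsi into the VM identified by dest_uuid.
 * dest_uuid must sit in physically contiguous memory (e.g. kmalloc'ed),
 * because we hand KVM its guest-physical address. */
static long pvi_notify(const u8 *dest_uuid, u32 dest_gsi)
{
    return kvm_hypercall2(KVM_HC_PVI,
                          virt_to_phys((void *)dest_uuid), dest_gsi);
}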
On 17/10/2016 08:47, Wang, Wei W wrote:
> Please let me elaborate on the two possible solutions, based on the
> existing eventfd mechanism and on the new hypercall mechanism - how we
> can use them to achieve the notification from the virtio1 driver to
> the virtio2 driver (across world contexts). We can't deliver
> interrupts directly from the virtio1 driver to the virtio2 driver, so
> for both solutions we need a trampoline - the host. A uuid field is
> necessary in the kvm struct so that the trampoline knows who is who.

This is already problematic. KVM tries really, really hard to avoid any global state across VMs. If you define a global UUID, you'll also have to design how to make it safe against multiple users of KVM, and how it interacts with features like user namespaces. And you'll also have to explain it to me, since I'm not at all a security expert. That may be harder than the design. :)

> Generally, two steps are needed:
> Step 1: virtio1's driver sends the interrupt request to the trampoline;
> Step 2: the trampoline sends the interrupt request to virtio2's driver.
>
> *Solution 1: eventfd*
> Step 1: achieved by virtio1's ioeventfd;
> Step 2: achieved by virtio2's irqfd.
>
> In the setup phase, the trampoline makes a connection between
> virtio1's ioeventfd and virtio2's irqfd. So, in this solution, we
> would need a host kernel module to do the trampoline work - connection
> setup and interrupt request delivery.

No, you don't! The point is that you can pass the same file descriptor to KVM_IOEVENTFD and KVM_IRQFD. The virtio-net VM can pass the irqfd to the vhost-net VM via the vhost socket. This is exactly how things work for vhost-user. vhost-pci can additionally use the received file descriptor as the ioeventfd.

>> No, the hypercall will not be accepted in any form. The established
>> protocols for communication between KVM and the outside world,
>> including other KVM instances, are MMIO write and irqfd.
>
> Could you please give more details about why a hypercall is not
> welcome, given that hypercalls have already been implemented in KVM
> for some usages? Thanks.

Well, hypercalls aren't really that common in KVM. :) There are exactly two, and one of them does nothing except force a vmexit.

Anyway, here are four good reasons why this hypercall is not welcome:

1) irqfd seems to be fast enough for VFIO and existing vhost backends, so it should be fast enough for vhost-pci as well;

2) if irqfd is not fast enough, optimizing it would benefit VFIO and existing vhost backends, so we should first look into that anyway;

3) vhost-pci's host part should be basically a vhost-user backend implemented by QEMU. Any deviation from that should be considered very carefully;

4) vhost-pci's first use case should be with DPDK, which does polling anyway, not interrupts.

Paolo
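[A minimal sketch of the fd sharing described above, with the simplification that one process holds both VM fds; in reality the eventfd travels between the two QEMUs over the vhost-user socket via SCM_RIGHTS, and the doorbell address, access length, and GSI below are illustrative placeholders:]

#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Wire one eventfd both ways: a guest write to doorbell_gpa in the
 * vhost-pci VM signals the fd (KVM_IOEVENTFD), and the same fd injects
 * doorbell_gsi into the virtio-net VM (KVM_IRQFD). No hypercall, and
 * neither guest needs to know the other's GSI numbering. */
static int wire_doorbell(int vhostpci_vm_fd, int virtio_vm_fd,
                         unsigned long long doorbell_gpa,
                         unsigned int doorbell_gsi)
{
    struct kvm_ioeventfd ioev;
    struct kvm_irqfd irq;
    int fd = eventfd(0, EFD_CLOEXEC);

    if (fd < 0)
        return -1;

    memset(&ioev, 0, sizeof(ioev));
    ioev.addr = doorbell_gpa;   /* guest->host kick address */
    ioev.len  = 2;              /* e.g. a 16-bit queue-notify write */
    ioev.fd   = fd;
    if (ioctl(vhostpci_vm_fd, KVM_IOEVENTFD, &ioev) < 0)
        goto err;

    memset(&irq, 0, sizeof(irq));
    irq.fd  = fd;
    irq.gsi = doorbell_gsi;     /* host->guest interrupt on the virtio side */
    if (ioctl(virtio_vm_fd, KVM_IRQFD, &irq) < 0)
        goto err;

    return fd;
err:
    close(fd);
    return -1;
}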
On Monday, October 17, 2016 4:59 PM, Paolo Bonzini wrote:
> On 17/10/2016 08:47, Wang, Wei W wrote:
> Well, hypercalls aren't really that common in KVM. :) There are
> exactly two, and one of them does nothing except force a vmexit.
>
> Anyway, here are four good reasons why this hypercall is not welcome:
>
> 1) irqfd seems to be fast enough for VFIO and existing vhost backends,
> so it should be fast enough for vhost-pci as well;
>
> 2) if irqfd is not fast enough, optimizing it would benefit VFIO and
> existing vhost backends, so we should first look into that anyway;
>
> 3) vhost-pci's host part should be basically a vhost-user backend
> implemented by QEMU. Any deviation from that should be considered very
> carefully;
>
> 4) vhost-pci's first use case should be with DPDK, which does polling
> anyway, not interrupts.

Thanks Paolo for the comments. I will take your suggestions and send out a new version of the design.

Best,
Wei
PV interrupts (PVI) enable a guest to send interrupts to another via hypercalls.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 pv_interrupt_controller.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)
 create mode 100644 pv_interrupt_controller.c

diff --git a/pv_interrupt_controller.c b/pv_interrupt_controller.c
new file mode 100644
index 0000000..5f2431d
--- /dev/null
+++ b/pv_interrupt_controller.c
@@ -0,0 +1,27 @@
+
+The pv interrupt (PVI) hypercall is proposed to support one guest sending
+interrupts to another guest using hypercalls. The following pseudocode shows
+how a PVI is sent from the guest:
+
+#define KVM_HC_PVI 9
+kvm_hypercall2(KVM_HC_PVI, guest_uuid, guest_gsi);
+
+The new hypercall number, KVM_HC_PVI, is used for the purpose of sending PVIs.
+guest_uuid identifies the guest that the interrupt will be sent to.
+guest_gsi identifies the interrupt source of that guest.
+
+The PVI hypercall handler in KVM iterates the VM list (the vm_list field in
+the kvm struct), finds the guest with the passed guest_uuid, and injects an
+interrupt to the guest with the guest_gsi number.
+
+Finally, on the permission to send a PVI from one guest to another:
+in the PVI setup phase, the PVI receiver should get the sender's UUID (e.g. via
+the vhost-user protocol extension implemented between QEMUs) and pass it to KVM.
+Two new fields will be added to struct kvm { }:
+
++uuid_t uuid; // the guest uuid
++uuid_t pvi_sender_uuid[MAX_NUM]; // the sender's uuid should be registered here
+
+A PVI will not be injected to the receiver guest if the sender's uuid does not
+appear in the receiver's pvi_sender_uuid table.
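[For reference, a sketch of what the handler described above might look like inside KVM. vm_list, kvm_lock, kvm_set_irq and KVM_USERSPACE_IRQ_SOURCE_ID are real kernel symbols; everything else (the uuid fields, MAX_NUM, the function itself) is the proposal's hypothetical addition, and the locking is deliberately simplified:]

/* Walk vm_list for the VM whose uuid matches dest_uuid; if the calling
 * VM is registered in its pvi_sender_uuid table, inject gsi there.
 * Simplified: kvm_set_irq may sleep, so a real patch could not call it
 * under the kvm_lock spinlock as done here. */
static int kvm_pvi_send(struct kvm *sender, const u8 *dest_uuid, u32 gsi)
{
    struct kvm *dest;
    int i, ret = -ENOENT;

    spin_lock(&kvm_lock);                 /* protects vm_list */
    list_for_each_entry(dest, &vm_list, vm_list) {
        if (memcmp(dest_uuid, &dest->uuid, 16))
            continue;
        ret = -EPERM;                     /* found, sender not yet verified */
        for (i = 0; i < MAX_NUM; i++) {
            if (!memcmp(&sender->uuid, &dest->pvi_sender_uuid[i], 16)) {
                kvm_set_irq(dest, KVM_USERSPACE_IRQ_SOURCE_ID,
                            gsi, 1, false);
                ret = 0;
                break;
            }
        }
        break;
    }
    spin_unlock(&kvm_lock);
    return ret;
}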