Message ID | cover.1559224640.git.ppircalabu@bitdefender.com (mailing list archive) |
---|---|
Series | Per vcpu vm_event channels |
On Thu, May 30, 2019 at 7:18 AM Petre Pircalabu <ppircalabu@bitdefender.com> wrote:
>
> This patchset adds a new mechanism for sending synchronous vm_event
> requests and handling vm_event responses without using a ring.
> As each synchronous request pauses the vcpu until the corresponding
> response is handled, it can be stored in a slotted memory buffer
> (one per vcpu) shared between the hypervisor and the controlling domain.
>
> The main advantages of this approach are:
> - the ability to dynamically allocate the necessary memory used to hold
>   the requests/responses (the size of vm_event_request_t/vm_event_response_t
>   can grow unrestricted by the ring's one-page limitation)
> - the ring's waitqueue logic is unnecessary in this case because the
>   vcpu sending the request is blocked until a response is received.

Hi Petre,
could you push this series as a git branch somewhere?

Thanks,
Tamas
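The cover letter describes one slot per vcpu, sized freely because the paused vcpu guarantees at most one outstanding request. Below is a minimal C sketch of what such a slot might look like; the names, states, and layout are assumptions for illustration only and are not the actual vm_event ABI from the patches.

```c
/*
 * Hypothetical per-vcpu slot layout -- illustrative only, not the real
 * vm_event_ng ABI.  One such slot lives in a buffer shared between Xen
 * and the controlling domain, one slot per vcpu.
 */
#include <stdint.h>

enum vm_event_slot_state {
    SLOT_IDLE,       /* no request pending                              */
    SLOT_SUBMITTED,  /* Xen wrote a request; the vcpu is paused         */
    SLOT_FINISHED,   /* the agent wrote a response; vcpu can be resumed */
};

struct vm_event_slot {
    uint32_t state;      /* one of enum vm_event_slot_state            */
    uint32_t size;       /* size of the payload currently stored below */
    /*
     * Because the vcpu stays blocked until its response is handled, the
     * payload only ever holds one request or one response at a time, so
     * it can grow without the "how many entries fit in a 4K ring page"
     * constraint of the ring interface.
     */
    uint8_t payload[];
};
```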
On Thu, 2019-05-30 at 08:27 -0700, Tamas K Lengyel wrote:
> On Thu, May 30, 2019 at 7:18 AM Petre Pircalabu
> <ppircalabu@bitdefender.com> wrote:
> >
> > This patchset adds a new mechanism for sending synchronous vm_event
> > requests and handling vm_event responses without using a ring.
> > As each synchronous request pauses the vcpu until the corresponding
> > response is handled, it can be stored in a slotted memory buffer
> > (one per vcpu) shared between the hypervisor and the controlling
> > domain.
> >
> > The main advantages of this approach are:
> > - the ability to dynamically allocate the necessary memory used to
> >   hold the requests/responses (the size of
> >   vm_event_request_t/vm_event_response_t can grow unrestricted by
> >   the ring's one-page limitation)
> > - the ring's waitqueue logic is unnecessary in this case because
> >   the vcpu sending the request is blocked until a response is
> >   received.
>
> Hi Petre,
> could you push this series as a git branch somewhere?
>
> Thanks,
> Tamas

Hi Tamas,

I've pushed the changes to
https://github.com/petrepircalabu/xen/tree/vm_event_ng/devel

Thank you very much for your support,
Petre
On 30/05/2019 07:18, Petre Pircalabu wrote:
> This patchset adds a new mechanism for sending synchronous vm_event
> requests and handling vm_event responses without using a ring.
> As each synchronous request pauses the vcpu until the corresponding
> response is handled, it can be stored in a slotted memory buffer
> (one per vcpu) shared between the hypervisor and the controlling domain.
>
> The main advantages of this approach are:
> - the ability to dynamically allocate the necessary memory used to hold
>   the requests/responses (the size of vm_event_request_t/vm_event_response_t
>   can grow unrestricted by the ring's one-page limitation)
> - the ring's waitqueue logic is unnecessary in this case because the
>   vcpu sending the request is blocked until a response is received.

Before I review patches 7-9 for more than stylistic things, can you
briefly describe what's next?

AFAICT, this introduces a second interface between Xen and the agent,
which is limited to synchronous events only, and exclusively uses a
slotted system per vcpu, with a per-vcpu event channel?

What (if any) are the future development plans, and what are the plans
for deprecating the use of the old interface?  (The answers to these
will affect my review of the new interface.)

~Andrew
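For concreteness, here is a rough agent-side sketch of the hand-off Andrew summarises (one slot and one event channel per vcpu). The wait/notify helpers and the slot array are hypothetical stand-ins for the real libxenevtchn and foreign-mapping calls, so this only illustrates the control flow, not the series' actual API.

```c
#include <stdint.h>
#include <string.h>

enum slot_state { SLOT_IDLE, SLOT_SUBMITTED, SLOT_FINISHED };

struct slot {
    enum slot_state state;
    uint8_t payload[256];          /* request, later overwritten by response */
};

/* Hypothetical stand-ins for the shared mapping and libxenevtchn calls. */
static struct slot slots[8];                    /* one slot per vcpu          */
static int wait_on_evtchn(unsigned int vcpu)  { (void)vcpu; return 0; }
static void notify_evtchn(unsigned int vcpu)  { (void)vcpu; }

/* Agent-side handling for one vcpu: consume the request, publish a response. */
static void handle_vcpu_event(unsigned int vcpu)
{
    struct slot *s = &slots[vcpu];

    wait_on_evtchn(vcpu);                       /* kicked by the hypervisor   */
    if (s->state != SLOT_SUBMITTED)
        return;                                 /* spurious notification      */

    /* ... decode the request in s->payload and decide on a response ...      */
    memset(s->payload, 0, sizeof(s->payload));  /* placeholder response       */

    s->state = SLOT_FINISHED;                   /* response is ready          */
    notify_evtchn(vcpu);                        /* tell Xen to resume the vcpu */
}

int main(void)
{
    handle_vcpu_event(0);
    return 0;
}
```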
On Fri, 2019-05-31 at 17:25 -0700, Andrew Cooper wrote:
> On 30/05/2019 07:18, Petre Pircalabu wrote:
> > This patchset adds a new mechanism for sending synchronous vm_event
> > requests and handling vm_event responses without using a ring.
> > As each synchronous request pauses the vcpu until the corresponding
> > response is handled, it can be stored in a slotted memory buffer
> > (one per vcpu) shared between the hypervisor and the controlling
> > domain.
> >
> > The main advantages of this approach are:
> > - the ability to dynamically allocate the necessary memory used to
> >   hold the requests/responses (the size of
> >   vm_event_request_t/vm_event_response_t can grow unrestricted by
> >   the ring's one-page limitation)
> > - the ring's waitqueue logic is unnecessary in this case because
> >   the vcpu sending the request is blocked until a response is
> >   received.
>
> Before I review patches 7-9 for more than stylistic things, can you
> briefly describe what's next?
>
> AFAICT, this introduces a second interface between Xen and the agent,
> which is limited to synchronous events only, and exclusively uses a
> slotted system per vcpu, with a per-vcpu event channel?

Using a distinct interface was proposed by George in order to allow the
existing vm_event clients to run unmodified.

> What (if any) are the future development plans, and what are the plans
> for deprecating the use of the old interface?  (The answers to these
> will affect my review of the new interface.)
>
> ~Andrew

At the moment we are only using sync vm_events, so the "one slot per
vcpu" approach suits us. Also, by dynamically allocating the vm_event
requests/responses, we can increase their size without suffering the
performance drop incurred when using the ring (+ waitqueue).

We do not currently have a schedule for deprecating the legacy
(ring-based) interface, but we will adapt the new interface based on
the feedback we receive from other vm_event users.
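To illustrate the size argument Petre makes: the shared ring is a single 4K page, so growing vm_event_request_t shrinks the number of in-flight entries, whereas a one-slot-per-vcpu buffer only ever needs one (arbitrarily sized) entry per paused vcpu. A back-of-the-envelope sketch follows; the request sizes are assumed values for illustration, not the real ABI sizes.

```c
#include <stdio.h>

#define PAGE_SIZE       4096u
#define REQ_SIZE_TODAY   128u   /* assumed sizeof(vm_event_request_t)    */
#define REQ_SIZE_GROWN   512u   /* a hypothetical, larger request format */

int main(void)
{
    /* A single shared ring page caps how many requests can be in flight. */
    printf("ring entries today:    %u\n", PAGE_SIZE / REQ_SIZE_TODAY);

    /* Growing the request shrinks that capacity further, while a slotted
     * per-vcpu buffer needs only one entry per paused vcpu regardless of
     * the request size. */
    printf("ring entries if grown: %u\n", PAGE_SIZE / REQ_SIZE_GROWN);
    return 0;
}
```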