| Message ID | 1486544452-28593-1-git-send-email-rcojocaru@bitdefender.com (mailing list archive) |
| --- | --- |
| State | New, archived |
On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru <rcojocaru@bitdefender.com> wrote:
> It is currently possible for the guest to lock up when subscribing
> to synchronous vm_events if max_vcpus is larger than the
> number of available ring buffer slots. With this patch, already
> paused VCPUs are no longer blocked, fixing the issue for this use
> case.
>
> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
> ---
>  xen/common/vm_event.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 82ce8f1..2005a64 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>       * See the comments above wake_blocked() for more information
>       * on how this mechanism works to avoid waiting. */
>      avail_req = vm_event_ring_available(ved);
> -    if( current->domain == d && avail_req < d->max_vcpus )
> +    if( current->domain == d && avail_req < d->max_vcpus &&
> +        !atomic_read( &current->vm_event_pause_count ) )
>          vm_event_mark_and_pause(current, ved);

Hi Razvan,
I would also like this patch to include the change that unblocks
the vCPUs as soon as a spot opens up on the ring. What this patch
does on its own will not solve the problem if asynchronous events
are used.

Tamas
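For context, a schematic of the "unblock as soon as a spot opens up" behaviour Tamas is describing, modeled on the wake_blocked() machinery referenced in the comment quoted above. Field and helper names (vm_event_ring_available(), ved->pause_flag) follow this era's xen/common/vm_event.c, but the function name and body below are illustrative only, not the eventual patch:

/* Sketch only: called on the response path, after the consumer has
 * taken an event off the ring, so a request slot may now be free. */
static void vm_event_wake_blocked_vcpus(struct domain *d,
                                        struct vm_event_domain *ved)
{
    struct vcpu *v;
    unsigned int avail_req = vm_event_ring_available(ved);

    if ( avail_req == 0 )
        return; /* still no room on the ring; leave vCPUs blocked */

    /* Wake at most one blocked vCPU per freed slot. */
    for_each_vcpu ( d, v )
    {
        if ( !avail_req )
            break;
        if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
        {
            vcpu_unpause(v); /* counterpart of vm_event_mark_and_pause() */
            avail_req--;
        }
    }
}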
On 02/08/2017 06:25 PM, Tamas K Lengyel wrote:
> On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru
> <rcojocaru@bitdefender.com> wrote:
>
> It is currently possible for the guest to lock up when subscribing
> to synchronous vm_events if max_vcpus is larger than the
> number of available ring buffer slots. With this patch, already
> paused VCPUs are no longer blocked, fixing the issue for this use
> case.
>
> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
> ---
>  xen/common/vm_event.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 82ce8f1..2005a64 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>       * See the comments above wake_blocked() for more information
>       * on how this mechanism works to avoid waiting. */
>      avail_req = vm_event_ring_available(ved);
> -    if( current->domain == d && avail_req < d->max_vcpus )
> +    if( current->domain == d && avail_req < d->max_vcpus &&
> +        !atomic_read( &current->vm_event_pause_count ) )
>          vm_event_mark_and_pause(current, ved);
>
> Hi Razvan,
> I would also like this patch to include the change that unblocks
> the vCPUs as soon as a spot opens up on the ring. What this patch
> does on its own will not solve the problem if asynchronous events
> are used.

Fair enough, I thought that might need more discussion and thus put it
into a subsequent patch, but I'll modify that as well, give it a spin
and submit V2.

Thanks,
Razvan
On 08/02/17 17:08, Razvan Cojocaru wrote:
> On 02/08/2017 06:25 PM, Tamas K Lengyel wrote:
>> On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru
>> <rcojocaru@bitdefender.com> wrote:
>>
>> It is currently possible for the guest to lock up when subscribing
>> to synchronous vm_events if max_vcpus is larger than the
>> number of available ring buffer slots. With this patch, already
>> paused VCPUs are no longer blocked, fixing the issue for this use
>> case.
>>
>> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> ---
>>  xen/common/vm_event.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>> index 82ce8f1..2005a64 100644
>> --- a/xen/common/vm_event.c
>> +++ b/xen/common/vm_event.c
>> @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>>       * See the comments above wake_blocked() for more information
>>       * on how this mechanism works to avoid waiting. */
>>      avail_req = vm_event_ring_available(ved);
>> -    if( current->domain == d && avail_req < d->max_vcpus )
>> +    if( current->domain == d && avail_req < d->max_vcpus &&
>> +        !atomic_read( &current->vm_event_pause_count ) )
>>          vm_event_mark_and_pause(current, ved);
>>
>> Hi Razvan,
>> I would also like this patch to include the change that unblocks
>> the vCPUs as soon as a spot opens up on the ring. What this patch
>> does on its own will not solve the problem if asynchronous events
>> are used.
> Fair enough, I thought that might need more discussion and thus put it
> into a subsequent patch, but I'll modify that as well, give it a spin
> and submit V2.

If you are spinning a v2, please pull current out into a variable at
the start (more efficient), and atomic_read() doesn't need spaces
inside.

~Andrew
On 02/08/2017 07:20 PM, Andrew Cooper wrote:
> On 08/02/17 17:08, Razvan Cojocaru wrote:
>> On 02/08/2017 06:25 PM, Tamas K Lengyel wrote:
>>> On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru
>>> <rcojocaru@bitdefender.com> wrote:
>>>
>>> It is currently possible for the guest to lock up when subscribing
>>> to synchronous vm_events if max_vcpus is larger than the
>>> number of available ring buffer slots. With this patch, already
>>> paused VCPUs are no longer blocked, fixing the issue for this use
>>> case.
>>>
>>> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>>> ---
>>>  xen/common/vm_event.c | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>>> index 82ce8f1..2005a64 100644
>>> --- a/xen/common/vm_event.c
>>> +++ b/xen/common/vm_event.c
>>> @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>>>       * See the comments above wake_blocked() for more information
>>>       * on how this mechanism works to avoid waiting. */
>>>      avail_req = vm_event_ring_available(ved);
>>> -    if( current->domain == d && avail_req < d->max_vcpus )
>>> +    if( current->domain == d && avail_req < d->max_vcpus &&
>>> +        !atomic_read( &current->vm_event_pause_count ) )
>>>          vm_event_mark_and_pause(current, ved);
>>>
>>> Hi Razvan,
>>> I would also like this patch to include the change that unblocks
>>> the vCPUs as soon as a spot opens up on the ring. What this patch
>>> does on its own will not solve the problem if asynchronous events
>>> are used.
>> Fair enough, I thought that might need more discussion and thus put it
>> into a subsequent patch, but I'll modify that as well, give it a spin
>> and submit V2.
>
> If you are spinning a v2, please pull current out into a variable at
> the start (more efficient), and atomic_read() doesn't need spaces
> inside.

Will do.

Thanks,
Razvan
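Folding in both review comments, the reworked check would presumably end up along the lines below. This is a sketch of the likely V2 shape, not the patch itself; vm_event_put_request()'s signature is taken from this era's tree, and the surrounding ring handling is elided:

/* Sketch of the V2 shape suggested in review -- illustrative only. */
void vm_event_put_request(struct domain *d, struct vm_event_domain *ved,
                          vm_event_request_t *req)
{
    struct vcpu *curr = current; /* read the per-CPU 'current' macro once */
    int avail_req;

    /* ... ring locking and request copy elided ... */

    avail_req = vm_event_ring_available(ved);
    if ( curr->domain == d && avail_req < d->max_vcpus &&
         !atomic_read(&curr->vm_event_pause_count) ) /* no inner spaces */
        vm_event_mark_and_pause(curr, ved);

    vm_event_ring_unlock(ved);
}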
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 82ce8f1..2005a64 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
      * See the comments above wake_blocked() for more information
      * on how this mechanism works to avoid waiting. */
     avail_req = vm_event_ring_available(ved);
-    if( current->domain == d && avail_req < d->max_vcpus )
+    if( current->domain == d && avail_req < d->max_vcpus &&
+        !atomic_read( &current->vm_event_pause_count ) )
         vm_event_mark_and_pause(current, ved);
 
     vm_event_ring_unlock(ved);
It is currently possible for the guest to lock up when subscribing
to synchronous vm_events if max_vcpus is larger than the
number of available ring buffer slots. With this patch, already
paused VCPUs are no longer blocked, fixing the issue for this use
case.

Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
---
 xen/common/vm_event.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
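To make the failure mode concrete, a schematic reading of the pre-patch check; the slot and vCPU counts are made up for illustration:

/* Schematic, with illustrative numbers:
 *
 *   avail_req    = 4  (free request slots on the shared ring page)
 *   d->max_vcpus = 8
 *
 * "avail_req < d->max_vcpus" then holds permanently, so pre-patch every
 * request marks-and-pauses the current vCPU -- even one already paused
 * for a synchronous event (vm_event_pause_count > 0) -- and, per the
 * commit message, the guest can lock up.  The added test skips those: */
if ( current->domain == d && avail_req < d->max_vcpus &&
     !atomic_read(&current->vm_event_pause_count) )
    vm_event_mark_and_pause(current, ved);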