
[RFC,v1,4/4] util/oslib-win32: Recursively pass the timeout

Message ID 4b48ca3f3c45bee1103bcb50d87e6d5cadc88d6f.1498607452.git.alistair.francis@xilinx.com (mailing list archive)
State New, archived

Commit Message

Alistair Francis June 27, 2017, 11:57 p.m. UTC
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---

 util/oslib-win32.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

Comments

Fam Zheng June 29, 2017, 9:38 a.m. UTC | #1
On Tue, 06/27 16:57, Alistair Francis wrote:
> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
> Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> ---
> 
>  util/oslib-win32.c | 21 +++++++++++++++++----
>  1 file changed, 17 insertions(+), 4 deletions(-)
> 
> diff --git a/util/oslib-win32.c b/util/oslib-win32.c
> index a015e1ac96..3630e46499 100644
> --- a/util/oslib-win32.c
> +++ b/util/oslib-win32.c
> @@ -432,10 +432,10 @@ static int poll_rest(gboolean poll_msgs, HANDLE *handles, gint nhandles,
>              }
>          }
>  
> -        /* If no timeout and polling several handles, recurse to poll
> -         * the rest of them.
> +        /* We only found one and we are waiting on more than one. Let's try
> +         * again.
>           */
> -        if (timeout == 0 && nhandles > 1) {
> +        if (nhandles > 1) {
>              /* Remove the handle that fired */
>              int i;
>              if ((ready - WAIT_OBJECT_0) < nhandles - 1) {
> @@ -444,7 +444,20 @@ static int poll_rest(gboolean poll_msgs, HANDLE *handles, gint nhandles,
>                  }
>              }
>              nhandles--;
> -            recursed_result = poll_rest(FALSE, handles, nhandles, fds, nfds, 0);
> +
> +            /* If we just had a very small timeout let's increase it when we
> +             * recurse to ensure we don't just busy wait. This ensures we let
> +             * the Windows threads block at least a little. If we previously
> +             * had some wait let's set it to zero to avoid blocking for too
> +             * long.
> +             */
> +            if (timeout < 10) {
> +                timeout = timeout + 1;
> +            } else {
> +                timeout = 0;
> +            }
> +            recursed_result = poll_rest(FALSE, handles, nhandles, fds,
> +                                        nfds, timeout);
>              return (recursed_result == -1) ? -1 : 1 + recursed_result;
>          }
>          return 1;
> -- 
> 2.11.0
> 

This is a hack; can we fix what is causing the busy wait instead?

Fam
Paolo Bonzini June 29, 2017, 12:34 p.m. UTC | #2
On 28/06/2017 01:57, Alistair Francis wrote:
> +        /* We only found one and we are waiting on more than one. Let's try
> +         * again.
>           */
> -        if (timeout == 0 && nhandles > 1) {
> +        if (nhandles > 1) {
>              /* Remove the handle that fired */
>              int i;
>              if ((ready - WAIT_OBJECT_0) < nhandles - 1) {
> @@ -444,7 +444,20 @@ static int poll_rest(gboolean poll_msgs, HANDLE *handles, gint nhandles,
>                  }
>              }
>              nhandles--;
> -            recursed_result = poll_rest(FALSE, handles, nhandles, fds, nfds, 0);
> +
> +            /* If we just had a very small timeout let's increase it when we
> +             * recurse to ensure we don't just busy wait. This ensures we let
> +             * the Windows threads block at least a little. If we previously
> +             * had some wait let's set it to zero to avoid blocking for too
> +             * long.
> +             */
> +            if (timeout < 10) {
> +                timeout = timeout + 1;
> +            } else {
> +                timeout = 0;
> +            }
> +            recursed_result = poll_rest(FALSE, handles, nhandles, fds,
> +                                        nfds, timeout);
>              return (recursed_result == -1) ? -1 : 1 + recursed_result;

I'm not sure I agree with this change, which is effectively delaying the
processing of events.  The question to me is which handles are
triggering so fast that QEMU effectively busy waits.

Maybe your QEMUs can get some breath with commit 12f8def0e0 ("win32:
replace custom mutex and condition variable with native primitives",
2017-03-27), since the native primitives are more efficient and TCG 2.8
used condvars a lot for qemu_io_proceeded_cond.

Paolo
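
For readers not familiar with that commit: "native primitives" here refers to the Win32 slim reader/writer lock and condition variable APIs available since Vista. The following is only an illustrative, self-contained sketch of those APIs, not the commit's actual code; it shows why they avoid the per-wait event objects and wakeup bookkeeping that a hand-rolled condition variable needs.

#include <windows.h>
#include <stdbool.h>

static SRWLOCK lock = SRWLOCK_INIT;
static CONDITION_VARIABLE cond = CONDITION_VARIABLE_INIT;
static bool ready;

static void wait_for_ready(void)
{
    AcquireSRWLockExclusive(&lock);
    while (!ready) {
        /* Atomically drops the lock and blocks in the kernel; the lock is
         * re-acquired before the call returns. */
        SleepConditionVariableSRW(&cond, &lock, INFINITE, 0);
    }
    ReleaseSRWLockExclusive(&lock);
}

static void signal_ready(void)
{
    AcquireSRWLockExclusive(&lock);
    ready = true;
    ReleaseSRWLockExclusive(&lock);
    WakeConditionVariable(&cond);
}
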
Alistair Francis June 29, 2017, 4:33 p.m. UTC | #3
On Thu, Jun 29, 2017 at 5:34 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>
> On 28/06/2017 01:57, Alistair Francis wrote:
>> +        /* We only found one and we are waiting on more than one. Let's try
>> +         * again.
>>           */
>> -        if (timeout == 0 && nhandles > 1) {
>> +        if (nhandles > 1) {
>>              /* Remove the handle that fired */
>>              int i;
>>              if ((ready - WAIT_OBJECT_0) < nhandles - 1) {
>> @@ -444,7 +444,20 @@ static int poll_rest(gboolean poll_msgs, HANDLE *handles, gint nhandles,
>>                  }
>>              }
>>              nhandles--;
>> -            recursed_result = poll_rest(FALSE, handles, nhandles, fds, nfds, 0);
>> +
>> +            /* If we just had a very small timeout let's increase it when we
>> +             * recurse to ensure we don't just busy wait. This ensures we let
>> +             * the Windows threads block at least a little. If we previously
>> +             * had some wait let's set it to zero to avoid blocking for too
>> +             * long.
>> +             */
>> +            if (timeout < 10) {
>> +                timeout = timeout + 1;
>> +            } else {
>> +                timeout = 0;
>> +            }
>> +            recursed_result = poll_rest(FALSE, handles, nhandles, fds,
>> +                                        nfds, timeout);
>>              return (recursed_result == -1) ? -1 : 1 + recursed_result;
>
> I'm not sure I agree with this change, which is effectively delaying the
> processing of events.  The question to me is which handles are
> triggering so fast that QEMU effectively busy waits.

Yeah, that is what I was trying to figure out, but didn't make much headway.

I kept seeing zero timeouts, which means that the thread never blocks
and this patch helps a lot.

>
> Maybe your QEMUs can get some breath with commit 12f8def0e0 ("win32:
> replace custom mutex and condition variable with native primitives",
> 2017-03-27), since the native primitives are more efficient and TCG 2.8
> used condvars a lot for qemu_io_proceeded_cond.

Ok, I will try that.

Does this mean you don't see the same slowness on QEMU 2.9?

Thanks,
Alistair

>
> Paolo
Paolo Bonzini June 29, 2017, 7:47 p.m. UTC | #4
> On Thu, Jun 29, 2017 at 5:34 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> > On 28/06/2017 01:57, Alistair Francis wrote:
> > I'm not sure I agree with this change, which is effectively delaying the
> > processing of events.  The question to me is which handles are
> > triggering so fast that QEMU effectively busy waits.
> 
> Yeah, that is what I was trying to figure out, but didn't make much headway.
> 
> I kept seeing zero timeouts, which means that the thread never blocks
> and this patch helps a lot.

Perhaps you can use tracepoints?  There shouldn't be many handles registered,
since on Windows even the GUI actions all go through messages.

> > Maybe your QEMUs can get some breath with commit 12f8def0e0 ("win32:
> > replace custom mutex and condition variable with native primitives",
> > 2017-03-27), since the native primitives are more efficient and TCG 2.8
> > used condvars a lot for qemu_io_proceeded_cond.
> 
> Ok, I will try that.
> 
> Does this mean you don't see the same slowness on QEMU 2.9?

I have not tried, but the patch is only working around the real issue,
as Fam pointed out.

Paolo
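
As a concrete illustration of the tracepoint suggestion, one could add an event to util/trace-events and call it next to the wait in poll_rest(); the event name and arguments below are hypothetical and only show the shape:

# util/trace-events
poll_rest_ready(int nhandles, int which, int timeout) "nhandles %d fired %d timeout %d"

/* util/oslib-win32.c, after the wait returns */
#include "trace.h"
...
    trace_poll_rest_ready(nhandles, ready - WAIT_OBJECT_0, timeout);

The event can then be enabled at run time with the -trace option to see which handle keeps firing while the timeout is zero.
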
Alistair Francis June 29, 2017, 9:17 p.m. UTC | #5
On Thu, Jun 29, 2017 at 12:47 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On Thu, Jun 29, 2017 at 5:34 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> > On 28/06/2017 01:57, Alistair Francis wrote:
>> > I'm not sure I agree with this change, which is effectively delaying the
>> > processing of events.  The question to me is which handles are
>> > triggering so fast that QEMU effectively busy waits.
>>
>> Yeah, that is what I was trying to figure out, but didn't make much headway.
>>
>> I kept seeing zero timeouts, which means that the thread never blocks
>> and this patch helps a lot.
>
> Perhaps you can use tracepoints?  There shouldn't be many handles registered,
> since on Windows even the GUI actions all go through messages.
>
>> > Maybe your QEMUs can get some breath with commit 12f8def0e0 ("win32:
>> > replace custom mutex and condition variable with native primitives",
>> > 2017-03-27), since the native primitives are more efficient and TCG 2.8
>> > used condvars a lot for qemu_io_proceeded_cond.
>>
>> Ok, I will try that.

No luck; I tried applying that and it didn't make any difference.

>>
>> Does this mean you don't see the same slowness on QEMU 2.9?
>
> I have not tried, but the patch is only working around the real issue,
> as Fam pointed out.

Agreed. I'll keep digging and see what I can find.

Thanks,
Alistair

>
> Paolo
>

Patch

diff --git a/util/oslib-win32.c b/util/oslib-win32.c
index a015e1ac96..3630e46499 100644
--- a/util/oslib-win32.c
+++ b/util/oslib-win32.c
@@ -432,10 +432,10 @@  static int poll_rest(gboolean poll_msgs, HANDLE *handles, gint nhandles,
             }
         }
 
-        /* If no timeout and polling several handles, recurse to poll
-         * the rest of them.
+        /* We only found one and we are waiting on more than one. Let's try
+         * again.
          */
-        if (timeout == 0 && nhandles > 1) {
+        if (nhandles > 1) {
             /* Remove the handle that fired */
             int i;
             if ((ready - WAIT_OBJECT_0) < nhandles - 1) {
@@ -444,7 +444,20 @@  static int poll_rest(gboolean poll_msgs, HANDLE *handles, gint nhandles,
                 }
             }
             nhandles--;
-            recursed_result = poll_rest(FALSE, handles, nhandles, fds, nfds, 0);
+
+            /* If we just had a very small timeout let's increase it when we
+             * recurse to ensure we don't just busy wait. This ensures we let
+             * the Windows threads block at least a little. If we previously
+             * had some wait let's set it to zero to avoid blocking for too
+             * long.
+             */
+            if (timeout < 10) {
+                timeout = timeout + 1;
+            } else {
+                timeout = 0;
+            }
+            recursed_result = poll_rest(FALSE, handles, nhandles, fds,
+                                        nfds, timeout);
             return (recursed_result == -1) ? -1 : 1 + recursed_result;
         }
         return 1;
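
For reference, the timeout adjustment the patch applies on each recursive call can be seen in isolation with a small standalone sketch (hypothetical, not part of the patch): starting from 0, the timeout grows by 1 ms per recursion level until it reaches 10 ms, after which it drops back to 0.

#include <stdio.h>

/* Mirror of the timeout adjustment in the patched poll_rest(). */
static int next_timeout(int timeout)
{
    if (timeout < 10) {
        return timeout + 1;   /* small timeout: let the next wait block a little */
    }
    return 0;                 /* already waited: don't block again */
}

int main(void)
{
    int timeout = 0;

    for (int level = 0; level < 12; level++) {
        printf("recursion level %2d: timeout %d ms\n", level, timeout);
        timeout = next_timeout(timeout);
    }
    return 0;
}
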