| Message ID | 1410796949-2221-4-git-send-email-akong@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
On Tue, 16 Sep 2014 00:02:29 +0800 Amos Kong <akong@redhat.com> wrote:

> This patch increases the schedule timeout to 10 jiffies, which is more
> appropriate, so that other tasks can more easily take the mutex lock.
>
> Signed-off-by: Amos Kong <akong@redhat.com>
> ---
>  drivers/char/hw_random/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
> index 263a370..b5d1b6f 100644
> --- a/drivers/char/hw_random/core.c
> +++ b/drivers/char/hw_random/core.c
> @@ -195,7 +195,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
>
>  mutex_unlock(&rng_mutex);
>
> - schedule_timeout_interruptible(1);
> + schedule_timeout_interruptible(10);
>
>  if (signal_pending(current)) {
>  err = -ERESTARTSYS;

Does a schedule of 1 ms or 10 ms decrease the throughput?

I think we need some benchmarks.
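The timeout argument to schedule_timeout_interruptible() is in jiffies, not milliseconds, so how long 1 or 10 jiffies actually sleeps depends on the kernel's CONFIG_HZ. Below is a small standalone C sketch of that relationship; the HZ values are common config choices used for illustration, not numbers taken from the thread.

```c
/*
 * Back-of-envelope conversion between jiffies and milliseconds for a
 * few common CONFIG_HZ settings.  At HZ=100 a single jiffy is already
 * 10 ms, so schedule_timeout_interruptible(10) can mean anything from
 * 10 ms (HZ=1000) to 100 ms (HZ=100) of sleep per call.
 */
#include <stdio.h>

int main(void)
{
	const int hz_values[] = { 100, 250, 300, 1000 };

	for (size_t i = 0; i < sizeof(hz_values) / sizeof(hz_values[0]); i++) {
		int hz = hz_values[i];
		printf("HZ=%4d: 1 jiffy = %6.2f ms, 10 jiffies = %7.2f ms\n",
		       hz, 1000.0 / hz, 10000.0 / hz);
	}
	return 0;
}
```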
On Mon, Sep 15, 2014 at 06:13:31PM +0200, Michael Büsch wrote:
> On Tue, 16 Sep 2014 00:02:29 +0800
> Amos Kong <akong@redhat.com> wrote:
>
> > This patch increases the schedule timeout to 10 jiffies, which is more
> > appropriate, so that other tasks can more easily take the mutex lock.
> >
> > Signed-off-by: Amos Kong <akong@redhat.com>
> > ---
> >  drivers/char/hw_random/core.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
> > index 263a370..b5d1b6f 100644
> > --- a/drivers/char/hw_random/core.c
> > +++ b/drivers/char/hw_random/core.c
> > @@ -195,7 +195,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> >
> >  mutex_unlock(&rng_mutex);
> >
> > - schedule_timeout_interruptible(1);
> > + schedule_timeout_interruptible(10);
> >
> >  if (signal_pending(current)) {
> >  err = -ERESTARTSYS;
>
> Does a schedule of 1 ms or 10 ms decrease the throughput?

In my test environment, 1 jiffy always works (100%); as Amit suggested,
10 jiffies is more appropriate.

After applying the current 3 patches, there is a throughput regression:
1.2 M/s -> 6 K/s.

We can schedule only at the end of the loop (size == 0), and only for a
non-SMP guest, so SMP guests won't be affected:

| if (!size && num_online_cpus() == 1)
|         schedule_timeout_interruptible(timeout);

Set timeout to 1:
  non-SMP guest with quick backend: 1.2 M/s -> about 49 K/s

Set timeout to 10:
  non-SMP guest with quick backend: 1.2 M/s -> about 490 K/s

We might need other benchmarks to test the performance, but we can see
that the change clearly caused a regression.

As we discussed in the other thread, need_resched() should work in this
case, so these patches might be the wrong fix.

> I think we need some benchmarks.
>
> --
> Michael
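A rough feel for why the per-iteration sleep dominates: each pass of the read loop hands userspace at most one rng_buffer worth of data before sleeping again, so the throughput ceiling is roughly chunk size divided by sleep time. The sketch below works that arithmetic through; the 64-byte chunk (one cache line) and the sleep lengths are assumptions for illustration, not measured values from the thread.

```c
/*
 * Crude throughput ceiling: bytes handed to userspace per sleep,
 * divided by the sleep length.  With an assumed 64-byte chunk and a
 * 10 ms sleep on every loop iteration this lands in the single-digit
 * KB/s range, the same order of magnitude as the 1.2 M/s -> 6 K/s
 * regression reported above.
 */
#include <stdio.h>

int main(void)
{
	const double chunk_bytes = 64.0;                /* assumed rng_buffer chunk size */
	const double sleep_ms[] = { 1.0, 10.0, 100.0 }; /* 1 or 10 jiffies at HZ=1000/100 */

	for (size_t i = 0; i < sizeof(sleep_ms) / sizeof(sleep_ms[0]); i++) {
		double bytes_per_sec = chunk_bytes / (sleep_ms[i] / 1000.0);
		printf("%6.1f ms sleep per %2.0f-byte chunk -> ~%7.2f KB/s ceiling\n",
		       sleep_ms[i], chunk_bytes, bytes_per_sec / 1000.0);
	}
	return 0;
}
```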
On Tue, 16 Sep 2014 08:27:40 +0800 Amos Kong <akong@redhat.com> wrote:

> Set timeout to 10:
>   non-SMP guest with quick backend: 1.2 M/s -> about 490 K/s

That sounds like an awful lot. This is a 60% loss in throughput.
I don't think we can live with that.
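For comparison, the need_resched() approach mentioned in the discussion above would gate the sleep on whether another task actually wants the CPU, rather than sleeping unconditionally on every pass. The following is only a sketch of that idea, assuming the loop tail matches the hunk shown in the diff; it is not the patch that was posted.

```c
/*
 * Sketch only: the tail of rng_dev_read()'s read loop with the sleep
 * gated on need_resched(), as suggested in the thread.  Illustration
 * of the idea, not a posted or merged change.
 */
	mutex_unlock(&rng_mutex);

	/* Yield only when the scheduler says another task wants the CPU. */
	if (need_resched())
		schedule_timeout_interruptible(1);

	if (signal_pending(current)) {
		err = -ERESTARTSYS;
		goto out;
	}
```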
```diff
diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index 263a370..b5d1b6f 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -195,7 +195,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
 
 		mutex_unlock(&rng_mutex);
 
-		schedule_timeout_interruptible(1);
+		schedule_timeout_interruptible(10);
 
 		if (signal_pending(current)) {
 			err = -ERESTARTSYS;
```
This patch increases the schedule timeout to 10 jiffies, which is more
appropriate, so that other tasks can more easily take the mutex lock.

Signed-off-by: Amos Kong <akong@redhat.com>
---
 drivers/char/hw_random/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)