Message ID: 153538379617.18303.11871598131511120870.stgit@localhost.localdomain (mailing list archive)
State: New, archived
Series: Extract bg queue logic out fuse_conn::lock
On Mon, Aug 27, 2018 at 06:29:56PM +0300, Kirill Tkhai wrote:
> Currently, we take fc->lock there only to check for fc->connected.
> But this flag is changed only on connection abort, which is a very
> rare operation. It looks worthwhile to make
> fuse_request_send_background() faster at the cost of making
> fuse_abort_conn() slower.
>
> So, we make fuse_request_send_background() lockless and mark the
> (fc->connected == 1) region as RCU-protected. The abort function just
> uses synchronize_sched() to wait until all pending background requests
> are queued, and then performs an ordinary abort.
>
> Note that synchronize_sched() is used instead of synchronize_rcu(),
> since we want to check fc->connected without rcu_dereference() in
> fuse_request_send_background() (i.e., not to add memory barriers to
> this hot path).

Apart from the inaccuracies in the above (the _sched variant is for
scheduling and NMI-taking code; the _sched variant requires
rcu_dereference() as well; rcu_dereference() does not add barriers;
rcu_dereference() is only for pointers, so we can't use it for an
integer), wouldn't it be simpler to just use bg_lock for checking
->connected, and lock bg_lock (as well as fc->lock) when setting
->connected?

Updated patch below (untested).

Thanks,
Miklos

---
Subject: fuse: do not take fc->lock in fuse_request_send_background()
From: Kirill Tkhai <ktkhai@virtuozzo.com>
Date: Mon, 27 Aug 2018 18:29:56 +0300

Currently, we take fc->lock there only to check for fc->connected.
But this flag is changed only on connection abort, which is a very
rare operation.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
---
 fs/fuse/dev.c    |   46 +++++++++++++++++++++++-----------------------
 fs/fuse/file.c   |    4 +++-
 fs/fuse/fuse_i.h |    4 +---
 3 files changed, 27 insertions(+), 27 deletions(-)

--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -574,42 +574,38 @@ ssize_t fuse_simple_request(struct fuse_
 	return ret;
 }
 
-/*
- * Called under fc->lock
- *
- * fc->connected must have been checked previously
- */
-void fuse_request_send_background_nocheck(struct fuse_conn *fc,
-					  struct fuse_req *req)
+bool fuse_request_queue_background(struct fuse_conn *fc, struct fuse_req *req)
 {
-	BUG_ON(!test_bit(FR_BACKGROUND, &req->flags));
+	bool queued = false;
+
+	WARN_ON(!test_bit(FR_BACKGROUND, &req->flags));
 	if (!test_bit(FR_WAITING, &req->flags)) {
 		__set_bit(FR_WAITING, &req->flags);
 		atomic_inc(&fc->num_waiting);
 	}
 	__set_bit(FR_ISREPLY, &req->flags);
 	spin_lock(&fc->bg_lock);
-	fc->num_background++;
-	if (fc->num_background == fc->max_background)
-		fc->blocked = 1;
-	if (fc->num_background == fc->congestion_threshold && fc->sb) {
-		set_bdi_congested(fc->sb->s_bdi, BLK_RW_SYNC);
-		set_bdi_congested(fc->sb->s_bdi, BLK_RW_ASYNC);
+	if (likely(fc->connected)) {
+		fc->num_background++;
+		if (fc->num_background == fc->max_background)
+			fc->blocked = 1;
+		if (fc->num_background == fc->congestion_threshold && fc->sb) {
+			set_bdi_congested(fc->sb->s_bdi, BLK_RW_SYNC);
+			set_bdi_congested(fc->sb->s_bdi, BLK_RW_ASYNC);
+		}
+		list_add_tail(&req->list, &fc->bg_queue);
+		flush_bg_queue(fc);
+		queued = true;
 	}
-	list_add_tail(&req->list, &fc->bg_queue);
-	flush_bg_queue(fc);
 	spin_unlock(&fc->bg_lock);
+
+	return queued;
 }
 
 void fuse_request_send_background(struct fuse_conn *fc, struct fuse_req *req)
 {
-	BUG_ON(!req->end);
-	spin_lock(&fc->lock);
-	if (fc->connected) {
-		fuse_request_send_background_nocheck(fc, req);
-		spin_unlock(&fc->lock);
-	} else {
-		spin_unlock(&fc->lock);
+	WARN_ON(!req->end);
+	if (!fuse_request_queue_background(fc, req)) {
 		req->out.h.error = -ENOTCONN;
 		req->end(fc, req);
 		fuse_put_request(fc, req);
@@ -2112,7 +2108,11 @@ void fuse_abort_conn(struct fuse_conn *f
 		struct fuse_req *req, *next;
 		LIST_HEAD(to_end);
 
+		/* Background queuing checks fc->connected under bg_lock */
+		spin_lock(&fc->bg_lock);
 		fc->connected = 0;
+		spin_unlock(&fc->bg_lock);
+
 		fc->aborted = is_abort;
 		fuse_set_initialized(fc);
 		list_for_each_entry(fud, &fc->devices, entry) {
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -863,9 +863,7 @@ ssize_t fuse_simple_request(struct fuse_
  * Send a request in the background
  */
 void fuse_request_send_background(struct fuse_conn *fc, struct fuse_req *req);
-
-void fuse_request_send_background_nocheck(struct fuse_conn *fc,
-					  struct fuse_req *req);
+bool fuse_request_queue_background(struct fuse_conn *fc, struct fuse_req *req);
 
 /* Abort all requests */
 void fuse_abort_conn(struct fuse_conn *fc, bool is_abort);
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1487,6 +1487,7 @@ __acquires(fc->lock)
 	struct fuse_inode *fi = get_fuse_inode(req->inode);
 	struct fuse_write_in *inarg = &req->misc.write.in;
 	__u64 data_size = req->num_pages * PAGE_SIZE;
+	bool queued;
 
 	if (!fc->connected)
 		goto out_free;
@@ -1502,7 +1503,8 @@ __acquires(fc->lock)
 
 	req->in.args[1].size = inarg->size;
 	fi->writectr++;
-	fuse_request_send_background_nocheck(fc, req);
+	queued = fuse_request_queue_background(fc, req);
+	WARN_ON(!queued);
 	return;
 
 out_free:
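For readers skimming the thread, the core idea of the updated patch above can be shown in isolation. The following is an illustrative sketch only, with hypothetical names, not the fs/fuse code: the abort path clears the flag under the same spinlock the queuing path must already take for the queue itself, so a queuer that sees the flag set is guaranteed to finish queuing before the abort can proceed past its unlock.

```c
/*
 * Illustrative sketch of the locking rule in the patch above; this is
 * not the fs/fuse code and the names are hypothetical.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

struct conn {
	spinlock_t bg_lock;		/* protects everything below */
	unsigned connected;		/* cleared exactly once, on abort */
	struct list_head bg_queue;	/* queued background work */
};

static void conn_init(struct conn *c)
{
	spin_lock_init(&c->bg_lock);
	INIT_LIST_HEAD(&c->bg_queue);
	c->connected = 1;
}

/* Hot path: one lock, which the queue manipulation needs anyway. */
static bool queue_background(struct conn *c, struct list_head *item)
{
	bool queued = false;

	spin_lock(&c->bg_lock);
	if (likely(c->connected)) {
		list_add_tail(item, &c->bg_queue);
		queued = true;
	}
	spin_unlock(&c->bg_lock);

	return queued;		/* false: caller fails the request itself */
}

/* Cold path: after the unlock, no new item can ever be queued. */
static void abort_conn(struct conn *c)
{
	spin_lock(&c->bg_lock);
	c->connected = 0;
	spin_unlock(&c->bg_lock);

	/* Now drain c->bg_queue (again under bg_lock) and end requests. */
}
```

Since fuse_request_queue_background() has to take bg_lock anyway to touch bg_queue and the background counters, folding the connected check under it adds nothing to the hot path; that is what makes this simpler than the RCU scheme.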
On 26.09.2018 15:25, Miklos Szeredi wrote:
> On Mon, Aug 27, 2018 at 06:29:56PM +0300, Kirill Tkhai wrote:
>> Currently, we take fc->lock there only to check for fc->connected.
>> But this flag is changed only on connection abort, which is a very
>> rare operation. It looks worthwhile to make
>> fuse_request_send_background() faster at the cost of making
>> fuse_abort_conn() slower.
>>
>> So, we make fuse_request_send_background() lockless and mark the
>> (fc->connected == 1) region as RCU-protected. The abort function just
>> uses synchronize_sched() to wait until all pending background
>> requests are queued, and then performs an ordinary abort.
>>
>> Note that synchronize_sched() is used instead of synchronize_rcu(),
>> since we want to check fc->connected without rcu_dereference() in
>> fuse_request_send_background() (i.e., not to add memory barriers to
>> this hot path).
>
> Apart from the inaccuracies in the above (the _sched variant is for
> scheduling and NMI-taking code; the _sched variant requires
> rcu_dereference() as well; rcu_dereference() does not add barriers;
> rcu_dereference() is only for pointers, so we can't use it for an
> integer),

Writing this I was inspired by expand_fdtable(). Yes, the description
is confusing, and we don't need rcu_dereference(), since we do not
touch memory pointed to by an __rcu pointer: we have no pointer at all.
synchronize_sched() guarantees:

    On systems with more than one CPU, when synchronize_sched() returns,
    each CPU is guaranteed to have executed a full memory barrier since
    the end of its last RCU-sched read-side critical section whose
    beginning preceded the call to synchronize_sched().

(And rcu_dereference() unfolds into smp_read_barrier_depends(), which
is what I meant by "added barriers".)

But it does not matter much. I'm OK with the patch you updated.

> wouldn't it be simpler to just use bg_lock for checking ->connected,
> and lock bg_lock (as well as fc->lock) when setting ->connected?
>
> Updated patch below (untested).

Tested it. Works for me.

Thanks,
Kirill

> ---
> Subject: fuse: do not take fc->lock in fuse_request_send_background()
> From: Kirill Tkhai <ktkhai@virtuozzo.com>
> Date: Mon, 27 Aug 2018 18:29:56 +0300
>
> [...]
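As an aside for readers unfamiliar with expand_fdtable(): stripped of the FUSE specifics, the RCU-sched pattern Kirill describes looks roughly like the sketch below. Names are hypothetical, and this is the approach the thread ultimately dropped, not the merged code.

```c
#include <linux/rcupdate.h>

static unsigned connected = 1;

/* Hot path: no lock; rcu_read_lock_sched() only disables preemption. */
static bool fast_path(void)
{
	bool ok;

	rcu_read_lock_sched();
	ok = connected;		/* plain load of an integer flag; there is
				 * no pointer here for rcu_dereference() */
	if (ok) {
		/* ... queue the request inside the read-side section ... */
	}
	rcu_read_unlock_sched();

	return ok;
}

/* Cold path: runs only on abort, so a grace period is affordable. */
static void slow_path_abort(void)
{
	connected = 0;
	/*
	 * Returns only after every CPU has left the preempt-disabled
	 * section it was in, so no fast_path() caller can still be
	 * acting on the old value of the flag.
	 */
	synchronize_sched();
	/* ... end all requests that did get queued ... */
}
```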
Hi, Miklos,

should I resend the series with the patch you changed, or have you
already taken it, since it carries your SoB?

Thanks,
Kirill

On 26.09.2018 18:18, Kirill Tkhai wrote:
> On 26.09.2018 15:25, Miklos Szeredi wrote:
>> On Mon, Aug 27, 2018 at 06:29:56PM +0300, Kirill Tkhai wrote:
>>> Currently, we take fc->lock there only to check for fc->connected.
>>> But this flag is changed only on connection abort, which is a very
>>> rare operation. It looks worthwhile to make
>>> fuse_request_send_background() faster at the cost of making
>>> fuse_abort_conn() slower.
>>>
>>> So, we make fuse_request_send_background() lockless and mark the
>>> (fc->connected == 1) region as RCU-protected. The abort function
>>> just uses synchronize_sched() to wait until all pending background
>>> requests are queued, and then performs an ordinary abort.
>>>
>>> Note that synchronize_sched() is used instead of synchronize_rcu(),
>>> since we want to check fc->connected without rcu_dereference() in
>>> fuse_request_send_background() (i.e., not to add memory barriers to
>>> this hot path).
>>
>> Apart from the inaccuracies in the above (the _sched variant is for
>> scheduling and NMI-taking code; the _sched variant requires
>> rcu_dereference() as well; rcu_dereference() does not add barriers;
>> rcu_dereference() is only for pointers, so we can't use it for an
>> integer),
>
> Writing this I was inspired by expand_fdtable(). Yes, the description
> is confusing, and we don't need rcu_dereference(), since we do not
> touch memory pointed to by an __rcu pointer: we have no pointer at
> all. synchronize_sched() guarantees:
>
>     On systems with more than one CPU, when synchronize_sched()
>     returns, each CPU is guaranteed to have executed a full memory
>     barrier since the end of its last RCU-sched read-side critical
>     section whose beginning preceded the call to synchronize_sched().
>
> (And rcu_dereference() unfolds into smp_read_barrier_depends(), which
> is what I meant by "added barriers".)
>
> But it does not matter much. I'm OK with the patch you updated.
>
>> wouldn't it be simpler to just use bg_lock for checking ->connected,
>> and lock bg_lock (as well as fc->lock) when setting ->connected?
>>
>> Updated patch below (untested).
>
> Tested it. Works for me.
>
> Thanks,
> Kirill
>
>> [...]
On Thu, Sep 27, 2018 at 10:37 AM, Kirill Tkhai <ktkhai@virtuozzo.com> wrote:
> Hi, Miklos,
>
> should I resend the series with the patch you changed, or have you
> already taken it, since it carries your SoB?

I already took that series.

Thanks,
Miklos
diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index 9690e7c79df7..064a65ca7283 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -575,8 +575,6 @@ ssize_t fuse_simple_request(struct fuse_conn *fc, struct fuse_args *args)
 }
 
 /*
- * Called under fc->lock
- *
  * fc->connected must have been checked previously
  */
 void fuse_request_send_background_nocheck(struct fuse_conn *fc,
@@ -604,12 +602,12 @@ void fuse_request_send_background_nocheck(struct fuse_conn *fc,
 void fuse_request_send_background(struct fuse_conn *fc, struct fuse_req *req)
 {
 	BUG_ON(!req->end);
-	spin_lock(&fc->lock);
+	rcu_read_lock_sched();
 	if (fc->connected) {
 		fuse_request_send_background_nocheck(fc, req);
-		spin_unlock(&fc->lock);
+		rcu_read_unlock_sched();
 	} else {
-		spin_unlock(&fc->lock);
+		rcu_read_unlock_sched();
 		req->out.h.error = -ENOTCONN;
 		req->end(fc, req);
 		fuse_put_request(fc, req);
@@ -2107,6 +2105,13 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
 		struct fuse_req *req, *next;
 		LIST_HEAD(to_end);
 
 		fc->connected = 0;
+		fc->aborting = true;
+		spin_unlock(&fc->lock);
+
+		/* Propagate fc->connected */
+		synchronize_sched();
+
+		spin_lock(&fc->lock);
 		fc->aborted = is_abort;
 		fuse_set_initialized(fc);
 		list_for_each_entry(fud, &fc->devices, entry) {
@@ -2145,12 +2150,14 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
 		spin_unlock(&fiq->waitq.lock);
 		kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
 		end_polls(fc);
+		fc->aborting = false;
 		wake_up_all(&fc->blocked_waitq);
 		spin_unlock(&fc->lock);
 		end_requests(fc, &to_end);
 	} else {
 		spin_unlock(&fc->lock);
+		wait_event(fc->blocked_waitq, !fc->aborting);
 	}
 }
 EXPORT_SYMBOL_GPL(fuse_abort_conn);
diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
index b2c6a1cd1e2c..b9a991b42c88 100644
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -523,6 +523,9 @@ struct fuse_conn {
 	    abort and device release */
 	unsigned connected;
 
+	/** Connection is now aborting */
+	bool aborting;
+
 	/** Connection aborted via sysfs */
 	bool aborted;
 
diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
index a4285ec7c248..9612eb4bc609 100644
--- a/fs/fuse/inode.c
+++ b/fs/fuse/inode.c
@@ -624,6 +624,7 @@ void fuse_conn_init(struct fuse_conn *fc, struct user_namespace *user_ns)
 	fc->blocked = 0;
 	fc->initialized = 0;
 	fc->connected = 1;
+	fc->aborting = false;
 	fc->attr_version = 1;
 	get_random_bytes(&fc->scramble_key, sizeof(fc->scramble_key));
 	fc->pid_ns = get_pid_ns(task_active_pid_ns(current));
Currently, we take fc->lock there only to check for fc->connected.
But this flag is changed only on connection abort, which is a very
rare operation. It looks worthwhile to make
fuse_request_send_background() faster at the cost of making
fuse_abort_conn() slower.

So, we make fuse_request_send_background() lockless and mark the
(fc->connected == 1) region as RCU-protected. The abort function just
uses synchronize_sched() to wait until all pending background requests
are queued, and then performs an ordinary abort.

Note that synchronize_sched() is used instead of synchronize_rcu(),
since we want to check fc->connected without rcu_dereference() in
fuse_request_send_background() (i.e., not to add memory barriers to
this hot path).

Also, note that a fuse_conn::aborting field is introduced. It is aimed
at synchronizing two aborts executing in parallel: we do not want the
second abort to return before the first one is done, merely because it
sees that fc->connected has already become 0.

Our storage test shows a performance increase on parallel read and
write with the patchset (7 test runs, 300 seconds of execution each,
aio):

./io_iops --read --write --iops -u 48g -s 4k -p 24 -n 24 --aio -q 128 -t 300 -f /mnt/vstorage/file

Before (iops):
25721.58203 (worst)
26179.43359
26092.58594
25789.96484
26238.63477 (best)
25985.88867
25987.66406

After (iops):
26852.27539
27547.60742
26322.29688
27643.33398 (best)
26753.10547
26157.91016 (worst)
26694.63477

All *after* runs are better than any *before* run, except one. The
best *after* run is 5% better than the best *before* run. The worst
*after* run is 1.7% better than the worst *before* run.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 fs/fuse/dev.c    |   17 ++++++++++++-----
 fs/fuse/fuse_i.h |    3 +++
 fs/fuse/inode.c  |    1 +
 3 files changed, 16 insertions(+), 5 deletions(-)
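The role of the aborting field described above can be distilled into the following sketch, simplified from the diff earlier in this message: request-list handling and other FUSE specifics are omitted, and the names are abbreviated stand-ins for the fc-> fields.

```c
#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

static DEFINE_SPINLOCK(lock);			/* stands in for fc->lock */
static DECLARE_WAIT_QUEUE_HEAD(blocked_waitq);	/* fc->blocked_waitq */
static unsigned connected = 1;
static bool aborting;

static void abort_conn(void)
{
	spin_lock(&lock);
	if (connected) {
		/* First caller does the real teardown. */
		connected = 0;
		aborting = true;
		spin_unlock(&lock);

		synchronize_sched();	/* flush lockless readers of connected */
		/* ... collect pending requests onto a local list ... */

		spin_lock(&lock);
		aborting = false;
		wake_up_all(&blocked_waitq);
		spin_unlock(&lock);
		/* ... end the collected requests ... */
	} else {
		spin_unlock(&lock);
		/*
		 * A second, parallel caller sees connected == 0 but must
		 * not return while the first abort is still mid-flight.
		 */
		wait_event(blocked_waitq, !aborting);
	}
}
```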