Message ID | 1631548441-2784-1-git-send-email-wang.yong12@zte.com.cn
---|---
State | New
Series | [v1] vmpressure: wake up work only when there is registration event
On Mon 13-09-21 08:54:01, yongw.pur@gmail.com wrote:
> From: wangyong <wang.yong12@zte.com.cn>
>
> Use the global variable num_events to record the number of vmpressure
> events registered by the system, and wake up work only when there is
> registration event.
> Usually, the vmpressure event is not registered in the system, this patch
> can avoid waking up work and doing nothing.

How much of an improvement does this bring?

> Refer to Michal Hocko's suggestion:
> https://lore.kernel.org/linux-mm/YR%2FNRJEhPKRQ1r22@dhcp22.suse.cz/

Let me also point out that we do have means to implement conditional
branches with zero overhead. Have a look at static branches.

> Tested-by: Zeal Robot <zealci@zte.com.cn>
> Signed-off-by: wangyong <wang.yong12@zte.com.cn>
> ---
>  mm/vmpressure.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> index 76518e4..dfac76b 100644
> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -67,6 +67,11 @@ static const unsigned int vmpressure_level_critical = 95;
>   */
>  static const unsigned int vmpressure_level_critical_prio = ilog2(100 / 10);
>
> +/*
> + * Count the number of vmpressure events registered in the system.
> + */
> +static atomic_t num_events = ATOMIC_INIT(0);
> +
>  static struct vmpressure *work_to_vmpressure(struct work_struct *work)
>  {
>  	return container_of(work, struct vmpressure, work);
> @@ -277,7 +282,7 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
>  		vmpr->tree_reclaimed += reclaimed;
>  		spin_unlock(&vmpr->sr_lock);
>
> -		if (scanned < vmpressure_win)
> +		if (scanned < vmpressure_win || atomic_read(&num_events) == 0)
>  			return;
>  		schedule_work(&vmpr->work);
>  	} else {

This is a very odd place to put the check. It is past the locked section,
and schedule_work() on its own is unlikely to add a big overhead. I would
have expected the check at the very beginning of vmpressure().
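For illustration, a minimal sketch of the early-bailout placement suggested
above, reusing the patch's num_events counter; the exact placement and the
elided body are illustrative, not the posted code:

void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
		unsigned long scanned, unsigned long reclaimed)
{
	struct vmpressure *vmpr;

	/* Nobody registered an event: skip all locking and accounting. */
	if (atomic_read(&num_events) == 0)
		return;

	vmpr = memcg_to_vmpressure(memcg);
	/* ... existing window and pressure accounting ... */
}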
diff --git a/mm/vmpressure.c b/mm/vmpressure.c
index 76518e4..dfac76b 100644
--- a/mm/vmpressure.c
+++ b/mm/vmpressure.c
@@ -67,6 +67,11 @@ static const unsigned int vmpressure_level_critical = 95;
  */
 static const unsigned int vmpressure_level_critical_prio = ilog2(100 / 10);
 
+/*
+ * Count the number of vmpressure events registered in the system.
+ */
+static atomic_t num_events = ATOMIC_INIT(0);
+
 static struct vmpressure *work_to_vmpressure(struct work_struct *work)
 {
 	return container_of(work, struct vmpressure, work);
@@ -277,7 +282,7 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
 		vmpr->tree_reclaimed += reclaimed;
 		spin_unlock(&vmpr->sr_lock);
 
-		if (scanned < vmpressure_win)
+		if (scanned < vmpressure_win || atomic_read(&num_events) == 0)
 			return;
 		schedule_work(&vmpr->work);
 	} else {
@@ -407,6 +412,7 @@ int vmpressure_register_event(struct mem_cgroup *memcg,
 	mutex_lock(&vmpr->events_lock);
 	list_add(&ev->node, &vmpr->events);
 	mutex_unlock(&vmpr->events_lock);
+	atomic_add(1, &num_events);
 	ret = 0;
 out:
 	kfree(spec_orig);
@@ -435,6 +441,7 @@ void vmpressure_unregister_event(struct mem_cgroup *memcg,
 		if (ev->efd != eventfd)
 			continue;
 		list_del(&ev->node);
+		atomic_sub(1, &num_events);
 		kfree(ev);
 		break;
 	}
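As for the zero-overhead alternative mentioned in the review, a sketch using
the kernel's static key API (jump labels) could look like the following; the
key name vmpressure_users is hypothetical, and the fragments show where each
call would go rather than a complete patch:

#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(vmpressure_users);

/* In vmpressure_register_event(), after the event is added: */
	static_branch_inc(&vmpressure_users);

/* In vmpressure_unregister_event(), after the event is removed: */
	static_branch_dec(&vmpressure_users);

/* In vmpressure(): patched to a no-op while no events are registered. */
	if (!static_branch_unlikely(&vmpressure_users))
		return;

Unlike an atomic_read(), a disabled static branch costs a patched no-op on
the hot path, avoiding even the memory load and conditional branch; flipping
the key is expensive, but register/unregister are rare operations.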