
[RFC] mm: swap: remove lru drain waiters

Message ID 20200601143734.9572-1-hdanton@sina.com (mailing list archive)
State New, archived

Commit Message

Hillf Danton June 1, 2020, 2:37 p.m. UTC
After updating the lru drain sequence, newcomers avoid waiting for
the current drainer, who is busy flushing work on each online CPU, by
only trying to lock the mutex; the drainer, OTOH, covers the work of
those who failed to acquire the lock by re-checking the lru drain
sequence after releasing the lock.

See eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
for reasons why we can skip waiting for the lock.

The memory barriers around the sequence and the lock work together
to remove the waiters without abandoning their drain work.

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Hillf Danton <hdanton@sina.com>
---
This is inspired by one of the works from Sebastian.

--

Comments

Konstantin Khlebnikov June 1, 2020, 3:41 p.m. UTC | #1
On 01/06/2020 17.37, Hillf Danton wrote:
> 
> After updating the lru drain sequence, newcomers avoid waiting for
> the current drainer, who is busy flushing work on each online CPU, by
> only trying to lock the mutex; the drainer, OTOH, covers the work of
> those who failed to acquire the lock by re-checking the lru drain
> sequence after releasing the lock.
> 
> See eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
> for reasons why we can skip waiting for the lock.

That patch says nothing about such a change in behaviour.

Callers like invalidate_bdev() really need a synchronous drain to be
sure that pages hold no extra references from the per-cpu vectors.
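
For illustration, here is a rough sketch of that dependency (paraphrased
from memory, not the exact kernel source; invalidate_bdev_like() is a
made-up name).  The invalidation only works if the drain has completed
by the time it runs:

/*
 * Sketch only: the caller relies on lru_add_drain_all() returning
 * after every CPU's pagevecs have been flushed.  If the drain were
 * asynchronous, invalidate_mapping_pages() could run while a page
 * still holds a reference from a per-cpu vector and would skip it.
 */
static void invalidate_bdev_like(struct address_space *mapping)
{
	if (mapping->nrpages) {
		lru_add_drain_all();	/* must be synchronous here */
		invalidate_mapping_pages(mapping, 0, -1);
	}
}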

> 
> The memory barriers around the sequence and the lock work together
> to remove the waiters without abandoning their drain work.
> 
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
> Signed-off-by: Hillf Danton <hdanton@sina.com>
> ---
> This is inspired by one of the works from Sebastian.
> 
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -714,10 +714,11 @@ static void lru_add_drain_per_cpu(struct
>    */
>   void lru_add_drain_all(void)
>   {
> -	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
> +	static unsigned int lru_drain_seq;
>   	static DEFINE_MUTEX(lock);
>   	static struct cpumask has_work;
> -	int cpu, seq;
> +	int cpu;
> +	unsigned int seq;
>   
>   	/*
>   	 * Make sure nobody triggers this path before mm_percpu_wq is fully
> @@ -726,18 +727,16 @@ void lru_add_drain_all(void)
>   	if (WARN_ON(!mm_percpu_wq))
>   		return;
>   
> -	seq = raw_read_seqcount_latch(&seqcount);
> +	lru_drain_seq++;
> +	smp_mb();
>   
> -	mutex_lock(&lock);
> +more_work:
>   
> -	/*
> -	 * Piggyback on drain started and finished while we waited for lock:
> -	 * all pages pended at the time of our enter were drained from vectors.
> -	 */
> -	if (__read_seqcount_retry(&seqcount, seq))
> -		goto done;
> +	if (!mutex_trylock(&lock))
> +		return;
>   
> -	raw_write_seqcount_latch(&seqcount);
> +	smp_mb();
> +	seq = lru_drain_seq;
>   
>   	cpumask_clear(&has_work);
>   
> @@ -759,8 +758,11 @@ void lru_add_drain_all(void)
>   	for_each_cpu(cpu, &has_work)
>   		flush_work(&per_cpu(lru_add_drain_work, cpu));
>   
> -done:
>   	mutex_unlock(&lock);
> +
> +	smp_mb();
> +	if (seq != lru_drain_seq)
> +		goto more_work;
>   }
>   #else
>   void lru_add_drain_all(void)
> --
>
Sebastian Andrzej Siewior June 3, 2020, 8:21 a.m. UTC | #2
On 2020-06-01 22:37:34 [+0800], Hillf Danton wrote:
> 
> After updating the lru drain sequence, newcomers avoid waiting for
> the current drainer, who is busy flushing work on each online CPU, by
> only trying to lock the mutex; the drainer, OTOH, covers the work of
> those who failed to acquire the lock by re-checking the lru drain
> sequence after releasing the lock.
> 
> See eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
> for reasons why we can skip waiting for the lock.
> 
> The memory barriers around the sequence and the lock work together
> to remove the waiters without abandoning their drain work.
> 
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
> Signed-off-by: Hillf Danton <hdanton@sina.com>
> ---
> This is inspired by one of the works from Sebastian.

Not me, it was Ahmed.

> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -714,10 +714,11 @@ static void lru_add_drain_per_cpu(struct
>   */
>  void lru_add_drain_all(void)
>  {
> -	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
> +	static unsigned int lru_drain_seq;
>  	static DEFINE_MUTEX(lock);
>  	static struct cpumask has_work;
> -	int cpu, seq;
> +	int cpu;
> +	unsigned int seq;
>  
>  	/*
>  	 * Make sure nobody triggers this path before mm_percpu_wq is fully
> @@ -726,18 +727,16 @@ void lru_add_drain_all(void)
>  	if (WARN_ON(!mm_percpu_wq))
>  		return;
>  
> -	seq = raw_read_seqcount_latch(&seqcount);
> +	lru_drain_seq++;
> +	smp_mb();
>  
> -	mutex_lock(&lock);
> +more_work:
>  
> -	/*
> -	 * Piggyback on drain started and finished while we waited for lock:
> -	 * all pages pended at the time of our enter were drained from vectors.
> -	 */
> -	if (__read_seqcount_retry(&seqcount, seq))
> -		goto done;
> +	if (!mutex_trylock(&lock))
> +		return;
>  
> -	raw_write_seqcount_latch(&seqcount);
> +	smp_mb();
> +	seq = lru_drain_seq;
>  
>  	cpumask_clear(&has_work);
>  
> @@ -759,8 +758,11 @@ void lru_add_drain_all(void)
>  	for_each_cpu(cpu, &has_work)
>  		flush_work(&per_cpu(lru_add_drain_work, cpu));
>  
> -done:
>  	mutex_unlock(&lock);
> +
> +	smp_mb();
> +	if (seq != lru_drain_seq)
> +		goto more_work;
>  }
>  #else
>  void lru_add_drain_all(void)
> --

Sebastian
Ahmed S. Darwish June 3, 2020, 10:24 a.m. UTC | #3
Hi Hillf,

For some reason, **all of your posts** from <hdanton@sina.com> do not
appear on lore.kernel.org.

Check, for example, https://lore.kernel.org/lkml/?q=hdanton%40sina.com,
where thread replies are there but not the actual posts.

Just wanted to let you know... Please continue below.

On Wed, Jun 03, 2020 at 10:21:45AM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-06-01 22:37:34 [+0800], Hillf Danton wrote:
> >
> > After updating the lru drain sequence, newcomers avoid waiting for
> > the current drainer, who is busy flushing work on each online CPU, by
> > only trying to lock the mutex; the drainer, OTOH, covers the work of
> > those who failed to acquire the lock by re-checking the lru drain
> > sequence after releasing the lock.
> >
> > See eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
> > for reasons why we can skip waiting for the lock.
> >
> > The memory barriers around the sequence and the lock work together
> > to remove the waiters without abandoning their drain work.
> >
> > Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
> > Signed-off-by: Hillf Danton <hdanton@sina.com>
> > ---
> > This is inspired by one of the works from Sebastian.
>
> Not me, it was Ahmed.
>
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -714,10 +714,11 @@ static void lru_add_drain_per_cpu(struct
> >   */
> >  void lru_add_drain_all(void)
> >  {
> > -	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
> > +	static unsigned int lru_drain_seq;
> >  	static DEFINE_MUTEX(lock);
> >  	static struct cpumask has_work;
> > -	int cpu, seq;
> > +	int cpu;
> > +	unsigned int seq;
> >
> >  	/*
> >  	 * Make sure nobody triggers this path before mm_percpu_wq is fully
> > @@ -726,18 +727,16 @@ void lru_add_drain_all(void)
> >  	if (WARN_ON(!mm_percpu_wq))
> >  		return;
> >
> > -	seq = raw_read_seqcount_latch(&seqcount);
> > +	lru_drain_seq++;
> > +	smp_mb();
> >
> > -	mutex_lock(&lock);
> > +more_work:
> >
> > -	/*
> > -	 * Piggyback on drain started and finished while we waited for lock:
> > -	 * all pages pended at the time of our enter were drained from vectors.
> > -	 */
> > -	if (__read_seqcount_retry(&seqcount, seq))
> > -		goto done;
> > +	if (!mutex_trylock(&lock))
> > +		return;
> >

The patch I've posted makes sure to preserve the existing draining
logic. It only fixes an erroneous usage of seqcount_t latching, plus a
memory-barrier bugfix found by John, which is to be included in v2:

    https://lkml.kernel.org/r/87y2pg9erj.fsf@vostro.fn.ogness.net

On the other hand, you're making the draining operation completely
asynchronous for a number of callers. This is such a huge change, and I
fail to see: 1) any rationale for it in the changelog, 2) whether it's
been verified that call-sites won't be affected.

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH
Hillf Danton June 3, 2020, 1:39 p.m. UTC | #4
On Wed, 3 Jun 2020 12:24:12 +0200 "Ahmed S. Darwish" wrote:
> On Wed, Jun 03, 2020 at 10:21:45AM +0200, Sebastian Andrzej Siewior wrote:
> > On 2020-06-01 22:37:34 [+0800], Hillf Danton wrote:
> > >
> > > After updating the lru drain sequence, newcomers avoid waiting for
> > > the current drainer, who is busy flushing work on each online CPU, by
> > > only trying to lock the mutex; the drainer, OTOH, covers the work of
> > > those who failed to acquire the lock by re-checking the lru drain
> > > sequence after releasing the lock.
> > >
> > > See eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
> > > for reasons why we can skip waiting for the lock.
> > >
> > > The memory barriers around the sequence and the lock work together
> > > to remove the waiters without abandoning their drain work.
> > >
> > > Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > > Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>

      Cc: Ahmed S. Darwish <a.darwish@linutronix.de>

> > > Signed-off-by: Hillf Danton <hdanton@sina.com>
> > > ---
> > > This is inspired by one of the works from Sebastian.
> >
This is inspired by one of the works from Ahmed:

  https://lore.kernel.org/linux-mm/20200519214547.352050-3-a.darwish@linutronix.de/

> >
> > > --- a/mm/swap.c
> > > +++ b/mm/swap.c
> > > @@ -714,10 +714,11 @@ static void lru_add_drain_per_cpu(struct
> > >   */
> > >  void lru_add_drain_all(void)
> > >  {
> > > -	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
> > > +	static unsigned int lru_drain_seq;
> > >  	static DEFINE_MUTEX(lock);
> > >  	static struct cpumask has_work;
> > > -	int cpu, seq;
> > > +	int cpu;
> > > +	unsigned int seq;
> > >
> > >  	/*
> > >  	 * Make sure nobody triggers this path before mm_percpu_wq is fully
> > > @@ -726,18 +727,16 @@ void lru_add_drain_all(void)
> > >  	if (WARN_ON(!mm_percpu_wq))
> > >  		return;
> > >
> > > -	seq = raw_read_seqcount_latch(&seqcount);
> > > +	lru_drain_seq++;
> > > +	smp_mb();
> > >
> > > -	mutex_lock(&lock);
> > > +more_work:
> > >
> > > -	/*
> > > -	 * Piggyback on drain started and finished while we waited for lock:
> > > -	 * all pages pended at the time of our enter were drained from vectors.
> > > -	 */
> > > -	if (__read_seqcount_retry(&seqcount, seq))
> > > -		goto done;
> > > +	if (!mutex_trylock(&lock))
> > > +		return;
> > >
> 
> The patch I've posted makes sure to preserve the existing draining
> logic. It only fixes an erroneous usage of seqcount_t latching, plus a
> memory barriers bugfix, found by John, and is to be included in v2:
> 
>     https://lkml.kernel.org/r/87y2pg9erj.fsf@vostro.fn.ogness.net

Thanks for your link.
> 
> On the other hand, you're making the draining operation completely
> asynchronous for a number of callers. This is such a huge change, and I
> fail to see: 1) any rationale for it in the changelog,

The changelog does not match the change. The change is based on my
understanding that the percpu pagevecs are asynchronous by nature. The
introduction of the seqcount casts a shaft of light on making the lru
drain async without losing any individual drain work.
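
For illustration only, below is a minimal userspace sketch (my own
paraphrase with pthreads and C11 atomics, not the kernel code; all
names are illustrative) of the counter-plus-trylock pattern: every
requester bumps the counter before taking the lock, and whoever holds
the lock keeps draining until the counter stops moving, so no request
is lost.  A requester that loses the trylock, however, returns before
its request has actually been serviced, which is the asynchrony in
question.

#include <pthread.h>
#include <stdatomic.h>

static atomic_uint drain_seq;
static pthread_mutex_t drain_lock = PTHREAD_MUTEX_INITIALIZER;

static void drain_all_cpus(void)
{
	/* stands in for queueing and flushing the per-CPU drain works */
}

void request_drain(void)
{
	unsigned int seq;

	atomic_fetch_add(&drain_seq, 1);	/* publish the request first */

more_work:
	if (pthread_mutex_trylock(&drain_lock) != 0)
		return;	/* async path: the current lock holder re-drains for us */

	seq = atomic_load(&drain_seq);		/* requests this pass covers */
	drain_all_cpus();
	pthread_mutex_unlock(&drain_lock);

	if (seq != atomic_load(&drain_seq))
		goto more_work;			/* a request arrived mid-drain */
}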

> 2) whether it's been verified that call-sites won't be affected.

No. It's 10x harder than s/lock/trylock/, you see.

Hillf

Patch

--- a/mm/swap.c
+++ b/mm/swap.c
@@ -714,10 +714,11 @@  static void lru_add_drain_per_cpu(struct
  */
 void lru_add_drain_all(void)
 {
-	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
+	static unsigned int lru_drain_seq;
 	static DEFINE_MUTEX(lock);
 	static struct cpumask has_work;
-	int cpu, seq;
+	int cpu;
+	unsigned int seq;
 
 	/*
 	 * Make sure nobody triggers this path before mm_percpu_wq is fully
@@ -726,18 +727,16 @@  void lru_add_drain_all(void)
 	if (WARN_ON(!mm_percpu_wq))
 		return;
 
-	seq = raw_read_seqcount_latch(&seqcount);
+	lru_drain_seq++;
+	smp_mb();
 
-	mutex_lock(&lock);
+more_work:
 
-	/*
-	 * Piggyback on drain started and finished while we waited for lock:
-	 * all pages pended at the time of our enter were drained from vectors.
-	 */
-	if (__read_seqcount_retry(&seqcount, seq))
-		goto done;
+	if (!mutex_trylock(&lock))
+		return;
 
-	raw_write_seqcount_latch(&seqcount);
+	smp_mb();
+	seq = lru_drain_seq;
 
 	cpumask_clear(&has_work);
 
@@ -759,8 +758,11 @@  void lru_add_drain_all(void)
 	for_each_cpu(cpu, &has_work)
 		flush_work(&per_cpu(lru_add_drain_work, cpu));
 
-done:
 	mutex_unlock(&lock);
+
+	smp_mb();
+	if (seq != lru_drain_seq)
+		goto more_work;
 }
 #else
 void lru_add_drain_all(void)