[1/5] percpu-refcount: Introduce percpu_ref_switch_to_atomic_nowait()

Message ID 20170908235226.26622-2-bart.vanassche@wdc.com (mailing list archive)
State New, archived

Commit Message

Bart Van Assche Sept. 8, 2017, 11:52 p.m. UTC
The blk-mq core keeps track of the number of request queue users
through q->q_usage_count. Make it possible to switch this counter
to atomic mode from the context of the block layer power management
code by introducing percpu_ref_switch_to_atomic_nowait().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: NeilBrown <neilb@suse.com>
---
 include/linux/percpu-refcount.h |  1 +
 lib/percpu-refcount.c           | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

Comments

Johannes Thumshirn Sept. 11, 2017, 8:42 a.m. UTC | #1
On Fri, Sep 08, 2017 at 04:52:22PM -0700, Bart Van Assche wrote:
> +/**
> + * percpu_ref_switch_to_nowait - switch a percpu_ref to atomic mode

This should probably read percpu_ref_switch_to_atomic_nowait, shouldn't it?

> +void percpu_ref_switch_to_atomic_nowait(struct percpu_ref *ref)
Tejun Heo Sept. 11, 2017, 1:13 p.m. UTC | #2
On Fri, Sep 08, 2017 at 04:52:22PM -0700, Bart Van Assche wrote:
> The blk-mq core keeps track of the number of request queue users
> through q->q_usage_count. Make it possible to switch this counter
> to atomic mode from the context of the block layer power management
> code by introducing percpu_ref_switch_to_atomic_nowait().

Do you ever have to switch back?  If so, how do you know whether the
previous transition finished?

Thanks.
Bart Van Assche Sept. 11, 2017, 4:09 p.m. UTC | #3
On Mon, 2017-09-11 at 06:13 -0700, Tejun Heo wrote:
> On Fri, Sep 08, 2017 at 04:52:22PM -0700, Bart Van Assche wrote:
> > The blk-mq core keeps track of the number of request queue users
> > through q->q_usage_count. Make it possible to switch this counter
> > to atomic mode from the context of the block layer power management
> > code by introducing percpu_ref_switch_to_atomic_nowait().
> 
> Do you ever have to switch back?  If so, how do you know whether the
> previous transition finished?

Hello Tejun,

The way I would like to use this function is to check for completion of the
transition by calling percpu_ref_is_zero(). That function checks not only
the value of the refcount but also whether it is in atomic mode. See
also "[PATCH 5/5] blk-mq: Implement power management support"
(https://www.spinics.net/lists/linux-block/msg17143.html). Do you think this
will work?
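
A rough sketch of the intended caller-side pattern (hypothetical illustration
only, not code from this series; the suspend-callback name is made up):

```c
/*
 * Hypothetical runtime-PM suspend check built on the new helper.
 * percpu_ref_is_zero() returns true only once @ref is in atomic mode
 * *and* the count has dropped to zero, so it doubles as the check that
 * the nowait transition requested below has completed.
 */
static int blk_pre_runtime_suspend_sketch(struct request_queue *q)
{
	/* Request the percpu -> atomic transition without blocking. */
	percpu_ref_switch_to_atomic_nowait(&q->q_usage_count);

	if (!percpu_ref_is_zero(&q->q_usage_count))
		return -EBUSY;	/* queue still in use; retry suspend later */

	return 0;
}
```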

Thanks,

Bart.
Bart Van Assche Sept. 11, 2017, 4:10 p.m. UTC | #4
On Mon, 2017-09-11 at 10:42 +0200, Johannes Thumshirn wrote:
> On Fri, Sep 08, 2017 at 04:52:22PM -0700, Bart Van Assche wrote:
> > +/**
> > + * percpu_ref_switch_to_nowait - switch a percpu_ref to atomic mode
> 
> This should probably read percpu_ref_switch_to_atomic_nowait, shouldn't it?

Indeed. Thanks for having caught and reported this. I will address this
when I resubmit this patch series.

Bart.
Tejun Heo Sept. 11, 2017, 4:37 p.m. UTC | #5
Hello,

On Mon, Sep 11, 2017 at 04:09:12PM +0000, Bart Van Assche wrote:
> On Mon, 2017-09-11 at 06:13 -0700, Tejun Heo wrote:
> > On Fri, Sep 08, 2017 at 04:52:22PM -0700, Bart Van Assche wrote:
> > > The blk-mq core keeps track of the number of request queue users
> > > through q->q_usage_count. Make it possible to switch this counter
> > > to atomic mode from the context of the block layer power management
> > > code by introducing percpu_ref_switch_to_atomic_nowait().
> > 
> > Do you ever have to switch back?  If so, how do you know whether the
> > previous transition finished?
> 
> The way I would like to use this function is to check for completion of the
> transition by calling percpu_ref_is_zero(). That function checks not only
> the value of the refcount but also whether it is in atomic mode. See
> also "[PATCH 5/5] blk-mq: Implement power management support"
> (https://www.spinics.net/lists/linux-block/msg17143.html). Do you think this
> will work?

Probably but that sounds really hairy.  I'd much prefer if it could be
done through the usual kill / confirm / release and re-init.  That's
not a possibility?

Thanks.
Bart Van Assche Sept. 11, 2017, 4:55 p.m. UTC | #6
On Mon, 2017-09-11 at 09:37 -0700, tj@kernel.org wrote:
> Hello,
> 
> On Mon, Sep 11, 2017 at 04:09:12PM +0000, Bart Van Assche wrote:
> > On Mon, 2017-09-11 at 06:13 -0700, Tejun Heo wrote:
> > > On Fri, Sep 08, 2017 at 04:52:22PM -0700, Bart Van Assche wrote:
> > > > The blk-mq core keeps track of the number of request queue users
> > > > through q->q_usage_count. Make it possible to switch this counter
> > > > to atomic mode from the context of the block layer power management
> > > > code by introducing percpu_ref_switch_to_atomic_nowait().
> > > 
> > > Do you ever have to switch back?  If so, how do you know whether the
> > > previous transition finished?
> > 
> > The way I would like to use this function is to check for completion of the
> > transition by calling percpu_ref_is_zero(). That function checks not only
> > the value of the refcount but also whether it is in atomic mode. See
> > also "[PATCH 5/5] blk-mq: Implement power management support"
> > (https://www.spinics.net/lists/linux-block/msg17143.html). Do you think this
> > will work?
> 
> Probably but that sounds really hairy.  I'd much prefer if it could be
> done through the usual kill / confirm / release and re-init.  That's
> not a possibility?

Hello Tejun,

Modifying this patch series such that it uses the confirmation mechanism
when switching from per-cpu to atomic mode should be possible. However, the
wait_event_lock_irq() call in __percpu_ref_switch_mode() is annoying when
calling percpu_ref_kill_and_confirm() from atomic context. How about adding
a nowait version of percpu_ref_kill_and_confirm() that returns a boolean
indicating whether or not this function succeeded?
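
One possible shape for such a helper (purely hypothetical, sketched by
mirroring the existing percpu_ref_kill_and_confirm() minus its blocking
wait; not an existing kernel API):

```c
/*
 * Hypothetical nowait variant of percpu_ref_kill_and_confirm(): only
 * proceed if no mode switch is already in flight, so the internal
 * wait on ref->confirm_switch can never sleep.  Returns false if the
 * kill could not be started.
 */
static bool percpu_ref_kill_and_confirm_nowait(struct percpu_ref *ref,
					       percpu_ref_func_t *confirm_kill)
{
	unsigned long flags;
	bool ret = false;

	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
	if (!ref->confirm_switch) {	/* no transition in progress */
		ref->percpu_count_ptr |= __PERCPU_REF_DEAD;
		__percpu_ref_switch_mode(ref, confirm_kill);
		percpu_ref_put(ref);
		ret = true;
	}
	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);

	return ret;
}
```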

Thanks,

Bart.
Tejun Heo Sept. 11, 2017, 5:20 p.m. UTC | #7
Hey,

On Mon, Sep 11, 2017 at 04:55:59PM +0000, Bart Van Assche wrote:
> > Probably but that sounds really hairy.  I'd much prefer if it could be
> > done through the usual kill / confirm / release and re-init.  That's
> > not a possibility?
> 
> Modifying this patch series such that it uses the confirmation mechanism
> when switching from per-cpu to atomic mode should be possible. However, the
> wait_event_lock_irq() call in __percpu_ref_switch_mode() is annoying when
> calling percpu_ref_kill_and_confirm() from atomic context. How about adding
> a nowait version of percpu_ref_kill_and_confirm() that returns a boolean
> indicating whether or not this function succeeded?

Hmm... so if you know that you won't try to switch again while
switching is in progress, it should be fine.  That can't be
guaranteed?

Thanks.
Bart Van Assche Sept. 11, 2017, 5:29 p.m. UTC | #8
On Mon, 2017-09-11 at 10:20 -0700, Tejun Heo wrote:
> Hmm... so if you know that you won't try to switch again while
> switching is in progress, it should be fine.  That can't be
> guaranteed?

Hello Tejun,

After having had another look at the power management documentation I see
now that power management callbacks are only called from atomic context if
pm_runtime_irq_safe(dev) is called first. Neither the block layer core nor
the SCSI core call this function so I will try to rework this patch series
without modifying the percpu-refcount implementation.

Thanks,

Bart.

Patch

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index c13dceb87b60..0d4bfbb392d7 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -100,6 +100,7 @@  void percpu_ref_exit(struct percpu_ref *ref);
 void percpu_ref_switch_to_atomic(struct percpu_ref *ref,
 				 percpu_ref_func_t *confirm_switch);
 void percpu_ref_switch_to_atomic_sync(struct percpu_ref *ref);
+void percpu_ref_switch_to_atomic_nowait(struct percpu_ref *ref);
 void percpu_ref_switch_to_percpu(struct percpu_ref *ref);
 void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
 				 percpu_ref_func_t *confirm_kill);
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index fe03c6d52761..cf9152ff0892 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -277,6 +277,27 @@  void percpu_ref_switch_to_atomic_sync(struct percpu_ref *ref)
 }
 EXPORT_SYMBOL_GPL(percpu_ref_switch_to_atomic_sync);
 
+/**
+ * percpu_ref_switch_to_nowait - switch a percpu_ref to atomic mode
+ * @ref: percpu_ref to switch to atomic mode
+ *
+ * Schedule switching of @ref to atomic mode. All its percpu counts will be
+ * collected to the main atomic counter. @ref will stay in atomic mode across
+ * kill/reinit cycles until percpu_ref_switch_to_percpu() is called.
+ *
+ * This function does not block and can be called from any context.
+ */
+void percpu_ref_switch_to_atomic_nowait(struct percpu_ref *ref)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
+	if (!ref->confirm_switch)
+		__percpu_ref_switch_to_atomic(ref, NULL);
+	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
+}
+EXPORT_SYMBOL_GPL(percpu_ref_switch_to_atomic_nowait);
+
 /**
  * percpu_ref_switch_to_percpu - switch a percpu_ref to percpu mode
  * @ref: percpu_ref to switch to percpu mode