From patchwork Fri Sep 8 23:52:22 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 9945145
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Tejun Heo, Christoph Lameter, NeilBrown
Subject: [PATCH 1/5]
 percpu-refcount: Introduce percpu_ref_switch_to_atomic_nowait()
Date: Fri, 8 Sep 2017 16:52:22 -0700
Message-Id: <20170908235226.26622-2-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20170908235226.26622-1-bart.vanassche@wdc.com>
References: <20170908235226.26622-1-bart.vanassche@wdc.com>
List-ID: <linux-block.vger.kernel.org>

The blk-mq core keeps track of the number of request queue users through
q->q_usage_count. Make it possible to switch this counter to atomic mode
from the context of the block layer power management code by introducing
percpu_ref_switch_to_atomic_nowait().

Signed-off-by: Bart Van Assche
Cc: Tejun Heo
Cc: Christoph Lameter
Cc: NeilBrown
---
 include/linux/percpu-refcount.h |  1 +
 lib/percpu-refcount.c           | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index c13dceb87b60..0d4bfbb392d7 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -100,6 +100,7 @@ void percpu_ref_exit(struct percpu_ref *ref);
 void percpu_ref_switch_to_atomic(struct percpu_ref *ref,
				 percpu_ref_func_t *confirm_switch);
 void percpu_ref_switch_to_atomic_sync(struct percpu_ref *ref);
+void percpu_ref_switch_to_atomic_nowait(struct percpu_ref *ref);
 void percpu_ref_switch_to_percpu(struct percpu_ref *ref);
 void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
				 percpu_ref_func_t *confirm_kill);
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index fe03c6d52761..cf9152ff0892 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -277,6 +277,27 @@ void percpu_ref_switch_to_atomic_sync(struct percpu_ref *ref)
 }
 EXPORT_SYMBOL_GPL(percpu_ref_switch_to_atomic_sync);
 
+/**
+ * percpu_ref_switch_to_atomic_nowait - switch a percpu_ref to atomic mode
+ * @ref: percpu_ref to switch to atomic mode
+ *
+ * Schedule switching of @ref to atomic mode. All its percpu counts will be
+ * collected to the main atomic counter. @ref will stay in atomic mode across
+ * kill/reinit cycles until percpu_ref_switch_to_percpu() is called.
+ *
+ * This function does not block and can be called from any context.
+ */
+void percpu_ref_switch_to_atomic_nowait(struct percpu_ref *ref)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
+	if (!ref->confirm_switch)
+		__percpu_ref_switch_to_atomic(ref, NULL);
+	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
+}
+EXPORT_SYMBOL_GPL(percpu_ref_switch_to_atomic_nowait);
+
 /**
  * percpu_ref_switch_to_percpu - switch a percpu_ref to percpu mode
  * @ref: percpu_ref to switch to percpu mode