From patchwork Fri Apr 29 13:35:37 2022
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12832008
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de, kirill.shutemov@linux.intel.com,
	mika.penttila@nextfour.com,
	david@redhat.com, jgg@nvidia.com, tj@kernel.org, dennis@kernel.org,
	ming.lei@redhat.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	songmuchun@bytedance.com, zhouchengming@bytedance.com,
	Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [RFC PATCH 03/18] percpu_ref: make percpu_ref_switch_lock per percpu_ref
Date: Fri, 29 Apr 2022 21:35:37 +0800
Message-Id: <20220429133552.33768-4-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20220429133552.33768-1-zhengqi.arch@bytedance.com>
References: <20220429133552.33768-1-zhengqi.arch@bytedance.com>
MIME-Version: 1.0

Currently, percpu_ref uses the global percpu_ref_switch_lock to protect
the mode switching operation. When multiple percpu_refs switch modes at
the same time, this global lock can become a performance bottleneck.

This patch makes percpu_ref_switch_lock a per-percpu_ref spinlock to
avoid that contention.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 include/linux/percpu-refcount.h |  2 ++
 lib/percpu-refcount.c           | 30 +++++++++++++++---------------
 2 files changed, 17 insertions(+), 15 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 75844939a965..eb8695e578fd 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -110,6 +110,8 @@ struct percpu_ref {
 	 */
 	unsigned long		percpu_count_ptr;
 
+	spinlock_t		percpu_ref_switch_lock;
+
 	/*
 	 * 'percpu_ref' is often embedded into user structure, and only
 	 * 'percpu_count_ptr' is required in fast path, move other fields
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 3a8906715e09..4336fd1bd77a 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -36,7 +36,6 @@
 
 #define PERCPU_COUNT_BIAS	(1LU << (BITS_PER_LONG - 1))
 
-static DEFINE_SPINLOCK(percpu_ref_switch_lock);
 static DECLARE_WAIT_QUEUE_HEAD(percpu_ref_switch_waitq);
 
 static unsigned long __percpu *percpu_count_ptr(struct percpu_ref *ref)
@@ -95,6 +94,7 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
 		start_count++;
 
 	atomic_long_set(&data->count, start_count);
+	spin_lock_init(&ref->percpu_ref_switch_lock);
 
 	data->release = release;
 	data->confirm_switch = NULL;
@@ -137,11 +137,11 @@ void percpu_ref_exit(struct percpu_ref *ref)
 	if (!data)
 		return;
 
-	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
+	spin_lock_irqsave(&ref->percpu_ref_switch_lock, flags);
 	ref->percpu_count_ptr |= atomic_long_read(&ref->data->count) <<
 		__PERCPU_REF_FLAG_BITS;
 	ref->data = NULL;
-	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
+	spin_unlock_irqrestore(&ref->percpu_ref_switch_lock, flags);
 
 	kfree(data);
 }
@@ -287,7 +287,7 @@ static void __percpu_ref_switch_mode(struct percpu_ref *ref,
 {
 	struct percpu_ref_data *data = ref->data;
 
-	lockdep_assert_held(&percpu_ref_switch_lock);
+	lockdep_assert_held(&ref->percpu_ref_switch_lock);
 
 	/*
 	 * If the previous ATOMIC switching hasn't finished yet, wait for
@@ -295,7 +295,7 @@ static void __percpu_ref_switch_mode(struct percpu_ref *ref,
 	 * isn't in progress, this function can be called from any context.
 	 */
 	wait_event_lock_irq(percpu_ref_switch_waitq, !data->confirm_switch,
-			    percpu_ref_switch_lock);
+			    ref->percpu_ref_switch_lock);
 
 	if (data->force_atomic || percpu_ref_is_dying(ref))
 		__percpu_ref_switch_to_atomic(ref, confirm_switch, sync);
@@ -329,12 +329,12 @@ void percpu_ref_switch_to_atomic(struct percpu_ref *ref,
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
+	spin_lock_irqsave(&ref->percpu_ref_switch_lock, flags);
 
 	ref->data->force_atomic = true;
 	__percpu_ref_switch_mode(ref, confirm_switch, sync);
 
-	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
+	spin_unlock_irqrestore(&ref->percpu_ref_switch_lock, flags);
 }
 EXPORT_SYMBOL_GPL(percpu_ref_switch_to_atomic);
 
@@ -376,12 +376,12 @@ void percpu_ref_switch_to_percpu(struct percpu_ref *ref)
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
+	spin_lock_irqsave(&ref->percpu_ref_switch_lock, flags);
 
 	ref->data->force_atomic = false;
 	__percpu_ref_switch_mode(ref, NULL, false);
 
-	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
+	spin_unlock_irqrestore(&ref->percpu_ref_switch_lock, flags);
 }
 EXPORT_SYMBOL_GPL(percpu_ref_switch_to_percpu);
 
@@ -407,7 +407,7 @@ void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
+	spin_lock_irqsave(&ref->percpu_ref_switch_lock, flags);
 
 	WARN_ONCE(percpu_ref_is_dying(ref),
 		  "%s called more than once on %ps!", __func__,
@@ -417,7 +417,7 @@ void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
 	__percpu_ref_switch_mode(ref, confirm_kill, false);
 	percpu_ref_put(ref);
 
-	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
+	spin_unlock_irqrestore(&ref->percpu_ref_switch_lock, flags);
 }
 EXPORT_SYMBOL_GPL(percpu_ref_kill_and_confirm);
 
@@ -438,12 +438,12 @@ bool percpu_ref_is_zero(struct percpu_ref *ref)
 		return false;
 
 	/* protect us from being destroyed */
-	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
+	spin_lock_irqsave(&ref->percpu_ref_switch_lock, flags);
 	if (ref->data)
 		count = atomic_long_read(&ref->data->count);
 	else
 		count = ref->percpu_count_ptr >> __PERCPU_REF_FLAG_BITS;
-	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
+	spin_unlock_irqrestore(&ref->percpu_ref_switch_lock, flags);
 
 	return count == 0;
 }
@@ -487,7 +487,7 @@ void percpu_ref_resurrect(struct percpu_ref *ref)
 	unsigned long __percpu *percpu_count;
 	unsigned long flags;
 
-	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
+	spin_lock_irqsave(&ref->percpu_ref_switch_lock, flags);
 
 	WARN_ON_ONCE(!percpu_ref_is_dying(ref));
 	WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));
@@ -496,6 +496,6 @@ void percpu_ref_resurrect(struct percpu_ref *ref)
 	percpu_ref_get(ref);
 	__percpu_ref_switch_mode(ref, NULL, false);
 
-	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
+	spin_unlock_irqrestore(&ref->percpu_ref_switch_lock, flags);
 }
 EXPORT_SYMBOL_GPL(percpu_ref_resurrect);
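
A note for readers outside this code: the change simply moves the serialization
point from one lock shared by every percpu_ref to a lock embedded in each ref,
so mode switches on unrelated refs no longer contend with each other. The small
userspace sketch below illustrates the same pattern with pthread mutexes; it is
only an analogy, not code from this patch, and the struct, field and function
names in it are invented for the example.

/*
 * Userspace analogy only -- not kernel code.  Names (struct ref,
 * switch_mode_*) are made up for illustration.
 *
 * Build: gcc -pthread example.c
 */
#include <pthread.h>
#include <stdio.h>

struct ref {
	long count;
	pthread_mutex_t switch_lock;	/* per-object lock, like ref->percpu_ref_switch_lock */
};

/* Old scheme: every object serializes on one global lock. */
static pthread_mutex_t global_switch_lock = PTHREAD_MUTEX_INITIALIZER;

static void switch_mode_global(struct ref *r)
{
	pthread_mutex_lock(&global_switch_lock);	/* all refs contend here */
	r->count++;					/* stand-in for the real mode switch */
	pthread_mutex_unlock(&global_switch_lock);
}

/* New scheme: each object takes only its own lock. */
static void switch_mode_per_ref(struct ref *r)
{
	pthread_mutex_lock(&r->switch_lock);
	r->count++;
	pthread_mutex_unlock(&r->switch_lock);
}

int main(void)
{
	struct ref a = { .count = 0 };
	struct ref b = { .count = 0 };

	pthread_mutex_init(&a.switch_lock, NULL);
	pthread_mutex_init(&b.switch_lock, NULL);

	switch_mode_global(&a);		/* would block any other ref's switch */
	switch_mode_per_ref(&b);	/* touches only b's lock */

	printf("a=%ld b=%ld\n", a.count, b.count);
	return 0;
}

The trade-off mirrors the diff above: one extra spinlock_t embedded in every
struct percpu_ref, in exchange for removing cross-ref contention on the
mode-switch path.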