From patchwork Tue Feb 18 03:14:45 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: JP Kobryn
X-Patchwork-Id: 13978857
From: JP Kobryn <inwardvessel@gmail.com>
To: shakeel.butt@linux.dev, tj@kernel.org, mhocko@kernel.org,
	hannes@cmpxchg.org, yosryahmed@google.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, kernel-team@meta.com
Subject: [PATCH 08/11] cgroup: rstat cpu lock indirection
Date: Mon, 17 Feb 2025 19:14:45 -0800
Message-ID: <20250218031448.46951-9-inwardvessel@gmail.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250218031448.46951-1-inwardvessel@gmail.com>
References: <20250218031448.46951-1-inwardvessel@gmail.com>

Where functions access the global per-cpu lock, change their signatures to accept the lock as a parameter instead. Change the code within these functions to only access the lock through that parameter. This indirection allows future code to pass in different locks, increasing extensibility. For example, a new lock could be added specifically for the bpf cgroups, and it would not contend with the existing lock.
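To illustrate the kind of extension this enables (a minimal sketch, not part of this patch: the lock name below is hypothetical, and it assumes cgroup_rstat_cpu_lock is a per-cpu raw_spinlock_t, as the _cgroup_rstat_cpu_lock(raw_spinlock_t *, int cpu, ...) signature suggests), a dedicated per-cpu lock for the bpf side could be declared and handed to the same helpers:

	/* hypothetical dedicated lock for bpf-side rstat updaters */
	static DEFINE_PER_CPU(raw_spinlock_t, bpf_rstat_cpu_lock);

	__bpf_kfunc void bpf_cgroup_rstat_updated(struct cgroup *cgroup, int cpu)
	{
		/* same helper as below in this patch, only the lock argument differs */
		__cgroup_rstat_updated(&(cgroup->bpf.rstat), cpu, &rstat_bpf_ops,
				&bpf_rstat_cpu_lock);
	}

Each per-cpu instance of such a lock would still need raw_spin_lock_init() at boot, as is presumably done for the existing cgroup_rstat_cpu_lock.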
Signed-off-by: JP Kobryn
Reviewed-by: Shakeel Butt
---
 kernel/cgroup/rstat.c | 74 +++++++++++++++++++++++++------------------
 1 file changed, 43 insertions(+), 31 deletions(-)

diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 4cb0f3ffc1db..9f6da3ea3c8c 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -177,7 +177,7 @@ void _cgroup_rstat_cpu_unlock(raw_spinlock_t *lock, int cpu,
 }
 
 static void __cgroup_rstat_updated(struct cgroup_rstat *rstat, int cpu,
-		struct cgroup_rstat_ops *ops)
+		struct cgroup_rstat_ops *ops, raw_spinlock_t *cpu_lock)
 {
 	struct cgroup *cgrp;
 	unsigned long flags;
@@ -194,7 +194,7 @@ static void __cgroup_rstat_updated(struct cgroup_rstat *rstat, int cpu,
 		return;
 
 	cgrp = ops->cgroup_fn(rstat);
-	flags = _cgroup_rstat_cpu_lock(&cgroup_rstat_cpu_lock, cpu, cgrp, true);
+	flags = _cgroup_rstat_cpu_lock(cpu_lock, cpu, cgrp, true);
 
 	/* put @rstat and all ancestors on the corresponding updated lists */
 	while (true) {
@@ -222,7 +222,7 @@ static void __cgroup_rstat_updated(struct cgroup_rstat *rstat, int cpu,
 		rstat = parent;
 	}
 
-	_cgroup_rstat_cpu_unlock(&cgroup_rstat_cpu_lock, cpu, cgrp, flags, true);
+	_cgroup_rstat_cpu_unlock(cpu_lock, cpu, cgrp, flags, true);
 }
 
 /**
@@ -236,13 +236,15 @@ static void __cgroup_rstat_updated(struct cgroup_rstat *rstat, int cpu,
  */
 void cgroup_rstat_updated(struct cgroup_subsys_state *css, int cpu)
 {
-	__cgroup_rstat_updated(&css->rstat, cpu, &rstat_css_ops);
+	__cgroup_rstat_updated(&css->rstat, cpu, &rstat_css_ops,
+			&cgroup_rstat_cpu_lock);
 }
 
 #ifdef CONFIG_CGROUP_BPF
 __bpf_kfunc void bpf_cgroup_rstat_updated(struct cgroup *cgroup, int cpu)
 {
-	__cgroup_rstat_updated(&(cgroup->bpf.rstat), cpu, &rstat_bpf_ops);
+	__cgroup_rstat_updated(&(cgroup->bpf.rstat), cpu, &rstat_bpf_ops,
+			&cgroup_rstat_cpu_lock);
 }
 #endif /* CONFIG_CGROUP_BPF */
 
@@ -319,7 +321,8 @@ static struct cgroup_rstat *cgroup_rstat_push_children(
  * here is the cgroup root whose updated_next can be self terminated.
  */
 static struct cgroup_rstat *cgroup_rstat_updated_list(
-		struct cgroup_rstat *root, int cpu, struct cgroup_rstat_ops *ops)
+		struct cgroup_rstat *root, int cpu, struct cgroup_rstat_ops *ops,
+		raw_spinlock_t *cpu_lock)
 {
 	struct cgroup_rstat_cpu *rstatc = rstat_cpu(root, cpu);
 	struct cgroup_rstat *head = NULL, *parent, *child;
@@ -327,7 +330,7 @@ static struct cgroup_rstat *cgroup_rstat_updated_list(
 	unsigned long flags;
 
 	cgrp = ops->cgroup_fn(root);
-	flags = _cgroup_rstat_cpu_lock(&cgroup_rstat_cpu_lock, cpu, cgrp, false);
+	flags = _cgroup_rstat_cpu_lock(cpu_lock, cpu, cgrp, false);
 
 	/* Return NULL if this subtree is not on-list */
 	if (!rstatc->updated_next)
@@ -364,7 +367,7 @@ static struct cgroup_rstat *cgroup_rstat_updated_list(
 	if (child != root)
 		head = cgroup_rstat_push_children(head, child, cpu, ops);
 unlock_ret:
-	_cgroup_rstat_cpu_unlock(&cgroup_rstat_cpu_lock, cpu, cgrp, flags, false);
+	_cgroup_rstat_cpu_unlock(cpu_lock, cpu, cgrp, flags, false);
 	return head;
 }
 
@@ -422,43 +425,46 @@ static inline void __cgroup_rstat_unlock(spinlock_t *lock,
 
 /* see cgroup_rstat_flush() */
 static void cgroup_rstat_flush_locked(struct cgroup_rstat *rstat,
-		struct cgroup_rstat_ops *ops)
-	__releases(&cgroup_rstat_lock) __acquires(&cgroup_rstat_lock)
+		struct cgroup_rstat_ops *ops, spinlock_t *lock,
+		raw_spinlock_t *cpu_lock)
+	__releases(lock) __acquires(lock)
 {
 	int cpu;
 
-	lockdep_assert_held(&cgroup_rstat_lock);
+	lockdep_assert_held(lock);
 
 	for_each_possible_cpu(cpu) {
 		struct cgroup_rstat *pos = cgroup_rstat_updated_list(
-				rstat, cpu, ops);
+				rstat, cpu, ops, cpu_lock);
 
 		for (; pos; pos = pos->rstat_flush_next)
 			ops->flush_fn(pos, cpu);
 
 		/* play nice and yield if necessary */
-		if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
+		if (need_resched() || spin_needbreak(lock)) {
 			struct cgroup *cgrp;
 
 			cgrp = ops->cgroup_fn(rstat);
-			__cgroup_rstat_unlock(&cgroup_rstat_lock, cgrp, cpu);
+			__cgroup_rstat_unlock(lock, cgrp, cpu);
 			if (!cond_resched())
 				cpu_relax();
-			__cgroup_rstat_lock(&cgroup_rstat_lock, cgrp, cpu);
+			__cgroup_rstat_lock(lock, cgrp, cpu);
 		}
 	}
 }
 
 static void __cgroup_rstat_flush(struct cgroup_rstat *rstat,
-		struct cgroup_rstat_ops *ops)
+		struct cgroup_rstat_ops *ops, spinlock_t *lock,
+		raw_spinlock_t *cpu_lock)
+	__acquires(lock) __releases(lock)
 {
 	struct cgroup *cgrp;
 
 	might_sleep();
 	cgrp = ops->cgroup_fn(rstat);
-	__cgroup_rstat_lock(&cgroup_rstat_lock, cgrp, -1);
-	cgroup_rstat_flush_locked(rstat, ops);
-	__cgroup_rstat_unlock(&cgroup_rstat_lock, cgrp, -1);
+	__cgroup_rstat_lock(lock, cgrp, -1);
+	cgroup_rstat_flush_locked(rstat, ops, lock, cpu_lock);
+	__cgroup_rstat_unlock(lock, cgrp, -1);
 }
 
 /**
@@ -476,26 +482,29 @@ static void __cgroup_rstat_flush(struct cgroup_rstat *rstat,
  */
 void cgroup_rstat_flush(struct cgroup_subsys_state *css)
 {
-	__cgroup_rstat_flush(&css->rstat, &rstat_css_ops);
+	__cgroup_rstat_flush(&css->rstat, &rstat_css_ops,
+			&cgroup_rstat_lock, &cgroup_rstat_cpu_lock);
 }
 
 #ifdef CONFIG_CGROUP_BPF
 __bpf_kfunc void bpf_cgroup_rstat_flush(struct cgroup *cgroup)
 {
-	__cgroup_rstat_flush(&(cgroup->bpf.rstat), &rstat_bpf_ops);
+	__cgroup_rstat_flush(&(cgroup->bpf.rstat), &rstat_bpf_ops,
+			&cgroup_rstat_lock, &cgroup_rstat_cpu_lock);
 }
 #endif /* CONFIG_CGROUP_BPF */
 
 static void __cgroup_rstat_flush_hold(struct cgroup_rstat *rstat,
-		struct cgroup_rstat_ops *ops)
-	__acquires(&cgroup_rstat_lock)
+		struct cgroup_rstat_ops *ops, spinlock_t *lock,
+		raw_spinlock_t *cpu_lock)
+	__acquires(lock)
 {
 	struct cgroup *cgrp;
 
 	might_sleep();
 	cgrp = ops->cgroup_fn(rstat);
-	__cgroup_rstat_lock(&cgroup_rstat_lock, cgrp, -1);
-	cgroup_rstat_flush_locked(rstat, ops);
+	__cgroup_rstat_lock(lock, cgrp, -1);
+	cgroup_rstat_flush_locked(rstat, ops, lock, cpu_lock);
 }
 
 /**
@@ -509,7 +518,8 @@ static void __cgroup_rstat_flush_hold(struct cgroup_rstat *rstat,
  */
 void cgroup_rstat_flush_hold(struct cgroup_subsys_state *css)
 {
-	__cgroup_rstat_flush_hold(&css->rstat, &rstat_css_ops);
+	__cgroup_rstat_flush_hold(&css->rstat, &rstat_css_ops,
+			&cgroup_rstat_lock, &cgroup_rstat_cpu_lock);
 }
 
 /**
@@ -517,13 +527,13 @@ void cgroup_rstat_flush_hold(struct cgroup_subsys_state *css)
  * @rstat: rstat node used to find associated cgroup used by tracepoint
  */
 static void __cgroup_rstat_flush_release(struct cgroup_rstat *rstat,
-		struct cgroup_rstat_ops *ops)
-	__releases(&cgroup_rstat_lock)
+		struct cgroup_rstat_ops *ops, spinlock_t *lock)
+	__releases(lock)
 {
 	struct cgroup *cgrp;
 
 	cgrp = ops->cgroup_fn(rstat);
-	__cgroup_rstat_unlock(&cgroup_rstat_lock, cgrp, -1);
+	__cgroup_rstat_unlock(lock, cgrp, -1);
 }
 
 /**
@@ -532,7 +542,8 @@ static void __cgroup_rstat_flush_release(struct cgroup_rstat *rstat,
  */
 void cgroup_rstat_flush_release(struct cgroup_subsys_state *css)
 {
-	__cgroup_rstat_flush_release(&css->rstat, &rstat_css_ops);
+	__cgroup_rstat_flush_release(&css->rstat, &rstat_css_ops,
+			&cgroup_rstat_lock);
 }
 
 static void __cgroup_rstat_init(struct cgroup_rstat *rstat)
@@ -605,7 +616,8 @@ int bpf_cgroup_rstat_init(struct cgroup_bpf *bpf)
 
 void bpf_cgroup_rstat_exit(struct cgroup_bpf *bpf)
 {
-	__cgroup_rstat_flush(&bpf->rstat, &rstat_bpf_ops);
+	__cgroup_rstat_flush(&bpf->rstat, &rstat_bpf_ops,
+			&cgroup_rstat_lock, &cgroup_rstat_cpu_lock);
 	__cgroup_rstat_exit(&bpf->rstat);
 }
 #endif /* CONFIG_CGROUP_BPF */