From patchwork Thu Sep  5 21:45:46 2019
X-Patchwork-Submitter: Roman Gushchin
X-Patchwork-Id: 11133969
From: Roman Gushchin
CC: Michal Hocko, Johannes Weiner, Shakeel Butt, Vladimir Davydov, Waiman Long, Roman Gushchin
Subject: [PATCH RFC 02/14] mm: memcg: introduce mem_cgroup_ptr
Date: Thu, 5 Sep 2019 14:45:46 -0700
Message-ID: <20190905214553.1643060-3-guro@fb.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190905214553.1643060-1-guro@fb.com>
References: <20190905214553.1643060-1-guro@fb.com>

This commit introduces the mem_cgroup_ptr structure and a corresponding
API. It implements a pointer to a memory cgroup with a built-in
reference counter. Its main goal is to implement reparenting
efficiently: if a number of objects (e.g. slab pages) have to keep a
pointer and a reference to a memory cgroup, they can use a
mem_cgroup_ptr instead. On reparenting, only the single
mem_cgroup_ptr->memcg pointer has to be changed, instead of walking
over all accounted objects.

mem_cgroup_ptr holds a single reference to the corresponding memory
cgroup. Because it is initialized before the css reference counter, the
css refcounter can't be bumped at allocation time. Instead, it is
bumped on reparenting, which happens during offlining. A cgroup is
never released while online, so this is safe.

mem_cgroup_ptr is released using rcu, so memcg->kmem_memcg_ptr can be
accessed in an rcu read section. On reparenting it is atomically
switched to NULL. If the reader gets NULL, it can just read the
parent's kmem_memcg_ptr instead.

Each memory cgroup contains a list of kmem_memcg_ptrs. On reparenting,
the list is spliced into the parent's list. The list is protected by
the css set lock.

Signed-off-by: Roman Gushchin
---
 include/linux/memcontrol.h | 50 ++++++++++++++++++++++
 mm/memcontrol.c            | 87 ++++++++++++++++++++++++++++++++++++--
 2 files changed, 133 insertions(+), 4 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 120d39066148..dd5ebfe5a86c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -23,6 +23,7 @@
 #include

 struct mem_cgroup;
+struct mem_cgroup_ptr;
 struct page;
 struct mm_struct;
 struct kmem_cache;
@@ -197,6 +198,22 @@ struct memcg_cgwb_frn {
 	int memcg_id;			/* memcg->css.id of foreign inode */
 	u64 at;				/* jiffies_64 at the time of dirtying */
 	struct wb_completion done;	/* tracks in-flight foreign writebacks */
+};
+
+/*
+ * A pointer to a memory cgroup with a built-in reference counter.
+ * For use as an intermediate object to simplify reparenting of
+ * objects charged to the cgroup. The memcg pointer can be switched
+ * to the parent cgroup without the need to modify all objects
+ * which hold the reference to the cgroup.
+ */
+struct mem_cgroup_ptr {
+	struct percpu_ref refcnt;
+	struct mem_cgroup *memcg;
+	union {
+		struct list_head list;
+		struct rcu_head rcu;
+	};
 };

 /*
@@ -312,6 +329,8 @@ struct mem_cgroup {
 	int kmemcg_id;
 	enum memcg_kmem_state kmem_state;
 	struct list_head kmem_caches;
+	struct mem_cgroup_ptr __rcu *kmem_memcg_ptr;
+	struct list_head kmem_memcg_ptr_list;
 #endif

 	int last_scanned_node;
@@ -440,6 +459,21 @@ struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
 	return css ? container_of(css, struct mem_cgroup, css) : NULL;
 }

+static inline bool mem_cgroup_ptr_tryget(struct mem_cgroup_ptr *ptr)
+{
+	return percpu_ref_tryget(&ptr->refcnt);
+}
+
+static inline void mem_cgroup_ptr_get(struct mem_cgroup_ptr *ptr)
+{
+	percpu_ref_get(&ptr->refcnt);
+}
+
+static inline void mem_cgroup_ptr_put(struct mem_cgroup_ptr *ptr)
+{
+	percpu_ref_put(&ptr->refcnt);
+}
+
 static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 	if (memcg)
@@ -1433,6 +1467,22 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
 	return memcg ? memcg->kmemcg_id : -1;
 }

+static inline struct mem_cgroup_ptr *
+mem_cgroup_get_kmem_ptr(struct mem_cgroup *memcg)
+{
+	struct mem_cgroup_ptr *memcg_ptr;
+
+	rcu_read_lock();
+	do {
+		memcg_ptr = rcu_dereference(memcg->kmem_memcg_ptr);
+		if (memcg_ptr && mem_cgroup_ptr_tryget(memcg_ptr))
+			break;
+	} while ((memcg = parent_mem_cgroup(memcg)));
+	rcu_read_unlock();
+
+	return memcg_ptr;
+}
+
 #else

 static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index effefcec47b3..cb9adb31360e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -266,6 +266,77 @@ struct cgroup_subsys_state *vmpressure_to_css(struct vmpressure *vmpr)
 }

 #ifdef CONFIG_MEMCG_KMEM
+extern spinlock_t css_set_lock;
+
+static void memcg_ptr_release(struct percpu_ref *ref)
+{
+	struct mem_cgroup_ptr *ptr = container_of(ref, struct mem_cgroup_ptr,
+						  refcnt);
+	unsigned long flags;
+
+	spin_lock_irqsave(&css_set_lock, flags);
+	list_del(&ptr->list);
+	spin_unlock_irqrestore(&css_set_lock, flags);
+
+	mem_cgroup_put(ptr->memcg);
+	percpu_ref_exit(ref);
+	kfree_rcu(ptr, rcu);
+}
+
+static int memcg_init_kmem_memcg_ptr(struct mem_cgroup *memcg)
+{
+	struct mem_cgroup_ptr *kmem_memcg_ptr;
+	int ret;
+
+	kmem_memcg_ptr = kmalloc(sizeof(struct mem_cgroup_ptr), GFP_KERNEL);
+	if (!kmem_memcg_ptr)
+		return -ENOMEM;
+
+	ret = percpu_ref_init(&kmem_memcg_ptr->refcnt, memcg_ptr_release,
+			      0, GFP_KERNEL);
+	if (ret) {
+		kfree(kmem_memcg_ptr);
+		return ret;
+	}
+
+	kmem_memcg_ptr->memcg = memcg;
+	INIT_LIST_HEAD(&kmem_memcg_ptr->list);
+	rcu_assign_pointer(memcg->kmem_memcg_ptr, kmem_memcg_ptr);
+	list_add(&kmem_memcg_ptr->list, &memcg->kmem_memcg_ptr_list);
+	return 0;
+}
+
+static void memcg_reparent_kmem_memcg_ptr(struct mem_cgroup *memcg,
+					  struct mem_cgroup *parent)
+{
+	unsigned int nr_reparented = 0;
+	struct mem_cgroup_ptr *memcg_ptr = NULL;
+
+	rcu_swap_protected(memcg->kmem_memcg_ptr, memcg_ptr, true);
+	percpu_ref_kill(&memcg_ptr->refcnt);
+
+	/*
+	 * kmem_memcg_ptr is initialized before css refcounter, so until now
+	 * it doesn't hold a reference to the memcg. Bump it here.
+	 */
+	css_get(&memcg->css);
+
+	spin_lock_irq(&css_set_lock);
+	list_for_each_entry(memcg_ptr, &memcg->kmem_memcg_ptr_list, list) {
+		xchg(&memcg_ptr->memcg, parent);
+		nr_reparented++;
+	}
+	if (nr_reparented)
+		list_splice(&memcg->kmem_memcg_ptr_list,
+			    &parent->kmem_memcg_ptr_list);
+	spin_unlock_irq(&css_set_lock);
+
+	if (nr_reparented) {
+		css_get_many(&parent->css, nr_reparented);
+		css_put_many(&memcg->css, nr_reparented);
+	}
+}
+
 /*
  * This will be the memcg's index in each cache's ->memcg_params.memcg_caches.
  * The main reason for not using cgroup id for this:
@@ -3554,7 +3625,7 @@ static void memcg_flush_percpu_vmevents(struct mem_cgroup *memcg)
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
-	int memcg_id;
+	int memcg_id, ret;

 	if (cgroup_memory_nokmem)
 		return 0;
@@ -3566,6 +3637,12 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 	if (memcg_id < 0)
 		return memcg_id;

+	ret = memcg_init_kmem_memcg_ptr(memcg);
+	if (ret) {
+		memcg_free_cache_id(memcg_id);
+		return ret;
+	}
+
 	static_branch_inc(&memcg_kmem_enabled_key);
 	/*
 	 * A memory cgroup is considered kmem-online as soon as it gets
@@ -3601,12 +3678,13 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 		parent = root_mem_cgroup;

 	/*
-	 * Deactivate and reparent kmem_caches. Then flush percpu
-	 * slab statistics to have precise values at the parent and
-	 * all ancestor levels. It's required to keep slab stats
+	 * Deactivate and reparent kmem_caches and reparent kmem_memcg_ptr.
+	 * Then flush percpu slab statistics to have precise values at the
+	 * parent and all ancestor levels. It's required to keep slab stats
 	 * accurate after the reparenting of kmem_caches.
 	 */
 	memcg_deactivate_kmem_caches(memcg, parent);
+	memcg_reparent_kmem_memcg_ptr(memcg, parent);
 	memcg_flush_percpu_vmstats(memcg, true);

 	kmemcg_id = memcg->kmemcg_id;
@@ -5171,6 +5249,7 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 	memcg->socket_pressure = jiffies;
 #ifdef CONFIG_MEMCG_KMEM
 	memcg->kmemcg_id = -1;
+	INIT_LIST_HEAD(&memcg->kmem_memcg_ptr_list);
 #endif
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&memcg->cgwb_list);