From patchwork Tue Jun 21 12:56:50 2022
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 12889204
From: Muchun Song <songmuchun@bytedance.com>
To: akpm@linux-foundation.org, hannes@cmpxchg.org, longman@redhat.com, mhocko@kernel.org,
	roman.gushchin@linux.dev, shakeelb@google.com
Cc: cgroups@vger.kernel.org, duanxiongchun@bytedance.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Muchun Song <songmuchun@bytedance.com>, Michal Koutný
Subject: [PATCH v6 03/11] mm: memcontrol: prepare objcg API for non-kmem usage
Date: Tue, 21 Jun 2022 20:56:50 +0800
Message-Id: <20220621125658.64935-4-songmuchun@bytedance.com>
In-Reply-To: <20220621125658.64935-1-songmuchun@bytedance.com>
References: <20220621125658.64935-1-songmuchun@bytedance.com>

Pagecache pages are charged at allocation time and hold a reference to
the original memory cgroup until they are reclaimed. Depending on
memory pressure, on the patterns of page sharing between cgroups, and
on the rate of cgroup creation and destruction, a large number of
dying memory cgroups can be pinned by pagecache pages, which makes
page reclaim less efficient and wastes memory.

To fix this, we can convert LRU pages and most other raw memcg pins to
the objcg direction, so that page->memcg_data always points to an
object cgroup. The objcg infrastructure then no longer serves only
CONFIG_MEMCG_KMEM. This patch moves the objcg infrastructure out of
the scope of CONFIG_MEMCG_KMEM so that LRU pages can reuse it for
charging.

LRU pages are not accounted at the root level, yet their
page->memcg_data points to the root_mem_cgroup, so page->memcg_data of
an LRU page always points to a valid pointer. However, the
root_mem_cgroup does not have an object cgroup. If we use the
obj_cgroup APIs to charge LRU pages, we have to set page->memcg_data
to a root object cgroup, so we also allocate an object cgroup for the
root_mem_cgroup.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Koutný
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
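Not part of the patch: below is a self-contained userspace C model of the
indirection described above. The structs and helpers (struct page,
obj_cgroup, objcg_alloc(), charge_page(), reparent_objcg()) are simplified
stand-ins, not the kernel API; the sketch only illustrates why pages that
pin an obj_cgroup instead of a mem_cgroup let a dying memcg be reparented
rather than stay pinned by its pagecache.

#include <stdio.h>
#include <stdlib.h>

struct mem_cgroup;

struct obj_cgroup {
	struct mem_cgroup *memcg;	/* which memcg currently backs this objcg */
	long refcnt;			/* one count per charged page (simplified) */
};

struct mem_cgroup {
	const char *name;
	struct obj_cgroup *objcg;	/* allocated when the memcg comes online */
	struct mem_cgroup *parent;
};

/* A page records the objcg it was charged to, never the memcg directly. */
struct page {
	struct obj_cgroup *objcg;
};

static struct obj_cgroup *objcg_alloc(struct mem_cgroup *memcg)
{
	struct obj_cgroup *objcg = calloc(1, sizeof(*objcg));

	if (!objcg)
		abort();
	objcg->memcg = memcg;
	memcg->objcg = objcg;
	return objcg;
}

static void charge_page(struct page *page, struct mem_cgroup *memcg)
{
	page->objcg = memcg->objcg;
	page->objcg->refcnt++;
}

/*
 * Offline path: re-point the dying memcg's objcg at the parent instead of
 * leaving the memcg pinned by every page still charged to it.
 */
static void reparent_objcg(struct mem_cgroup *memcg)
{
	memcg->objcg->memcg = memcg->parent;
	memcg->objcg = NULL;
}

int main(void)
{
	struct mem_cgroup root = { .name = "root" };
	struct mem_cgroup child = { .name = "child", .parent = &root };
	struct page page;

	objcg_alloc(&root);	/* the patch allocates an objcg for root too */
	objcg_alloc(&child);

	charge_page(&page, &child);
	reparent_objcg(&child);	/* "child" goes offline */

	/* The page now resolves to "root" without being touched. */
	printf("page is charged to %s\n", page.objcg->memcg->name);

	free(page.objcg);
	free(root.objcg);
	return 0;
}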
 include/linux/memcontrol.h |  2 +-
 mm/memcontrol.c            | 56 +++++++++++++++++++++++++++-------------------
 2 files changed, 34 insertions(+), 24 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d0c0da7cafb7..111eda6ff1ce 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -321,10 +321,10 @@ struct mem_cgroup {
 
 #ifdef CONFIG_MEMCG_KMEM
 	int kmemcg_id;
+#endif
 	struct obj_cgroup __rcu *objcg;
 	/* list of inherited objcgs, protected by objcg_lock */
 	struct list_head objcg_list;
-#endif
 
 	MEMCG_PADDING(_pad2_);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fc706d6fc265..3c489651d312 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -252,9 +252,9 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
 	return container_of(vmpr, struct mem_cgroup, vmpressure);
 }
 
-#ifdef CONFIG_MEMCG_KMEM
 static DEFINE_SPINLOCK(objcg_lock);
 
+#ifdef CONFIG_MEMCG_KMEM
 bool mem_cgroup_kmem_disabled(void)
 {
 	return cgroup_memory_nokmem;
@@ -263,12 +263,10 @@ bool mem_cgroup_kmem_disabled(void)
 static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
 				      unsigned int nr_pages);
 
-static void obj_cgroup_release(struct percpu_ref *ref)
+static void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
 {
-	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
 	unsigned int nr_bytes;
 	unsigned int nr_pages;
-	unsigned long flags;
 
 	/*
 	 * At this point all allocated objects are freed, and
@@ -282,9 +280,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
 	 * 3) CPU1: a process from another memcg is allocating something,
 	 *    the stock if flushed,
 	 *    objcg->nr_charged_bytes = PAGE_SIZE - 92
-	 * 5) CPU0: we do release this object,
+	 * 4) CPU0: we do release this object,
 	 *    92 bytes are added to stock->nr_bytes
-	 * 6) CPU0: stock is flushed,
+	 * 5) CPU0: stock is flushed,
 	 *    92 bytes are added to objcg->nr_charged_bytes
 	 *
 	 * In the result, nr_charged_bytes == PAGE_SIZE.
@@ -296,6 +294,19 @@ static void obj_cgroup_release(struct percpu_ref *ref)
 
 	if (nr_pages)
 		obj_cgroup_uncharge_pages(objcg, nr_pages);
+}
+#else
+static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
+{
+}
+#endif
+
+static void obj_cgroup_release(struct percpu_ref *ref)
+{
+	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
+	unsigned long flags;
+
+	obj_cgroup_release_bytes(objcg);
 
 	spin_lock_irqsave(&objcg_lock, flags);
 	list_del(&objcg->list);
@@ -324,10 +335,10 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
 	return objcg;
 }
 
-static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
-				  struct mem_cgroup *parent)
+static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 {
 	struct obj_cgroup *objcg, *iter;
+	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
 
 	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
 
@@ -346,6 +357,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
 	percpu_ref_kill(&objcg->refcnt);
 }
 
+#ifdef CONFIG_MEMCG_KMEM
 /*
  * A lot of the calls to the cache allocation functions are expected to be
  * inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are
@@ -3651,21 +3663,12 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
-	struct obj_cgroup *objcg;
-
 	if (cgroup_memory_nokmem)
 		return 0;
 
 	if (unlikely(mem_cgroup_is_root(memcg)))
 		return 0;
 
-	objcg = obj_cgroup_alloc();
-	if (!objcg)
-		return -ENOMEM;
-
-	objcg->memcg = memcg;
-	rcu_assign_pointer(memcg->objcg, objcg);
-
 	static_branch_enable(&memcg_kmem_enabled_key);
 
 	memcg->kmemcg_id = memcg->id.id;
@@ -3675,17 +3678,13 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 
 static void memcg_offline_kmem(struct mem_cgroup *memcg)
 {
-	struct mem_cgroup *parent;
-
 	if (cgroup_memory_nokmem)
 		return;
 
 	if (unlikely(mem_cgroup_is_root(memcg)))
 		return;
 
-	parent = parent_mem_cgroup(memcg);
-	memcg_reparent_objcgs(memcg, parent);
-	memcg_reparent_list_lrus(memcg, parent);
+	memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg));
 }
 #else
 static int memcg_online_kmem(struct mem_cgroup *memcg)
@@ -5190,8 +5189,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 	memcg->socket_pressure = jiffies;
 #ifdef CONFIG_MEMCG_KMEM
 	memcg->kmemcg_id = -1;
-	INIT_LIST_HEAD(&memcg->objcg_list);
 #endif
+	INIT_LIST_HEAD(&memcg->objcg_list);
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&memcg->cgwb_list);
 	for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++)
@@ -5256,6 +5255,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct obj_cgroup *objcg;
 
 	if (memcg_online_kmem(memcg))
 		goto remove_id;
@@ -5268,6 +5268,13 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	if (alloc_shrinker_info(memcg))
 		goto offline_kmem;
 
+	objcg = obj_cgroup_alloc();
+	if (!objcg)
+		goto free_shrinker;
+
+	objcg->memcg = memcg;
+	rcu_assign_pointer(memcg->objcg, objcg);
+
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	refcount_set(&memcg->id.ref, 1);
 	css_get(css);
@@ -5276,6 +5283,8 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 		queue_delayed_work(system_unbound_wq, &stats_flush_dwork,
 				   2UL*HZ);
 	return 0;
+free_shrinker:
+	free_shrinker_info(memcg);
 offline_kmem:
 	memcg_offline_kmem(memcg);
 remove_id:
@@ -5303,6 +5312,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 	page_counter_set_min(&memcg->memory, 0);
 	page_counter_set_low(&memcg->memory, 0);
 
+	memcg_reparent_objcgs(memcg);
 	memcg_offline_kmem(memcg);
 	reparent_shrinker_deferred(memcg);
 	wb_memcg_offline(memcg);
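
Not part of the patch: for reference, a minimal standalone C sketch of the
goto-unwind ordering that the mem_cgroup_css_online() hunk above extends.
Each setup step that fails must undo the earlier steps in reverse order,
which is why the new obj_cgroup allocation gets its own free_shrinker label
in front of the existing offline_kmem one. The function and variable names
below are placeholders, not kernel APIs.

#include <stdio.h>
#include <stdlib.h>

/*
 * fail_after: how many setup steps succeed before one fails.
 * Step 1 stands in for memcg_online_kmem(), step 2 for
 * alloc_shrinker_info(), step 3 for obj_cgroup_alloc().
 */
static int fake_css_online(int fail_after)
{
	void *kmem = NULL, *shrinker = NULL, *objcg = NULL;

	kmem = (fail_after > 0) ? malloc(1) : NULL;
	if (!kmem)
		goto remove_id;

	shrinker = (fail_after > 1) ? malloc(1) : NULL;
	if (!shrinker)
		goto offline_kmem;

	objcg = (fail_after > 2) ? malloc(1) : NULL;
	if (!objcg)
		goto free_shrinker;		/* the label the patch adds */

	/* success: in the demo just release everything and return 0 */
	free(objcg);
	free(shrinker);
	free(kmem);
	return 0;

free_shrinker:
	free(shrinker);				/* stands in for free_shrinker_info() */
offline_kmem:
	free(kmem);				/* stands in for memcg_offline_kmem() */
remove_id:
	return -1;				/* stands in for the -ENOMEM path */
}

int main(void)
{
	/* Two steps succeed, the objcg step fails: unwind via free_shrinker. */
	printf("fake_css_online(2) = %d\n", fake_css_online(2));
	/* All three steps succeed. */
	printf("fake_css_online(3) = %d\n", fake_css_online(3));
	return 0;
}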