From patchwork Tue Nov 10 01:06:15 2020
X-Patchwork-Submitter: Roman Gushchin
X-Patchwork-Id: 11892837
From: Roman Gushchin
To: Andrew Morton
Cc: Shakeel Butt, Johannes Weiner, Michal Hocko, Roman Gushchin
Subject: [PATCH] mm: memcg/slab: enable slab memory accounting atomically
Date: Mon, 9 Nov 2020 17:06:15 -0800
Message-ID: <20201110010615.1273043-1-guro@fb.com>
X-Mailer: git-send-email 2.24.1

Many kernel memory accounting paths are guarded by the memcg_kmem_enabled_key
static key. It changes its state during the onlining of the first non-root
cgroup. However, this doesn't happen atomically: before all call sites are
patched, some charges/uncharges can be skipped, resulting in an unbalanced
charge. The problem is mostly theoretical and is unlikely to have a noticeable
impact in real life.

Before the rework of the slab controller we relied on setting kmemcg_id after
enabling the memcg_kmem_enabled_key static key. Now we can use the setting of
memcg->objcg to enable slab memory accounting atomically.

The patch also removes obsolete comments related to already deleted members
of kmem_cache->memcg_params.

Signed-off-by: Roman Gushchin
Fixes: 10befea91b61 ("mm: memcg/slab: use a single set of kmem_caches for all allocations")
---
 include/linux/memcontrol.h | 6 ++----
 mm/memcontrol.c            | 7 ++++---
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 20108e426f84..01099dfa839c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -310,7 +310,6 @@ struct mem_cgroup {
 	int tcpmem_pressure;
 
 #ifdef CONFIG_MEMCG_KMEM
-	/* Index in the kmem_cache->memcg_params.memcg_caches array */
 	int kmemcg_id;
 	enum memcg_kmem_state kmem_state;
 	struct obj_cgroup __rcu *objcg;
@@ -1641,9 +1640,8 @@ static inline void memcg_kmem_uncharge_page(struct page *page, int order)
 }
 
 /*
- * helper for accessing a memcg's index. It will be used as an index in the
- * child cache array in kmem_cache, and also to derive its name. This function
- * will return -1 when this is not a kmem-limited memcg.
+ * A helper for accessing memcg's kmem_id, used for getting
+ * corresponding LRU lists.
  */
 static inline int memcg_cache_id(struct mem_cgroup *memcg)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 69a2893a6455..267cc68fba05 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3675,17 +3675,18 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 		memcg_free_cache_id(memcg_id);
 		return -ENOMEM;
 	}
-	objcg->memcg = memcg;
-	rcu_assign_pointer(memcg->objcg, objcg);
 
 	static_branch_enable(&memcg_kmem_enabled_key);
 
 	/*
 	 * A memory cgroup is considered kmem-online as soon as it gets
-	 * kmemcg_id. Setting the id after enabling static branching will
+	 * objcg. Setting the objcg after enabling static branching will
 	 * guarantee no one starts accounting before all call sites are
 	 * patched.
 	 */
+	objcg->memcg = memcg;
+	rcu_assign_pointer(memcg->objcg, objcg);
+
 	memcg->kmemcg_id = memcg_id;
 	memcg->kmem_state = KMEM_ONLINE;
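
For illustration only, here is a minimal userspace sketch of the ordering argument
above. It is not kernel code: the static key is modeled by a plain atomic flag,
rcu_assign_pointer()/rcu_dereference() by a release/acquire pointer, and all
identifiers (demo_charge, demo_online_kmem, fake_objcg) are invented for this
sketch. The point it shows is that the charge path only proceeds once the objcg
pointer has been published, and the pointer is published strictly after the key
stand-in is enabled, so nothing can be accounted while the key is still being
switched on.

/*
 * Userspace model only -- NOT kernel code. In the kernel the static key is
 * enabled by patching call sites, which is not atomic across CPUs; here that
 * detail is abstracted away, and the objcg pointer publication is the gate
 * that makes enabling accounting effectively atomic, as in the patch.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_objcg { long charged; };

static atomic_bool demo_kmem_enabled;                /* stands in for memcg_kmem_enabled_key */
static _Atomic(struct fake_objcg *) demo_objcg;      /* stands in for memcg->objcg */

/* Consumer side: roughly what a charge path does. */
static void demo_charge(long nr)
{
	struct fake_objcg *objcg;

	if (!atomic_load_explicit(&demo_kmem_enabled, memory_order_relaxed))
		return;                         /* accounting not enabled yet */

	/* acquire pairs with the release store in demo_online_kmem() */
	objcg = atomic_load_explicit(&demo_objcg, memory_order_acquire);
	if (!objcg)
		return;                         /* objcg not published yet: consistently skip */

	objcg->charged += nr;
}

/* Producer side: mirrors the ordering the patch establishes. */
static void demo_online_kmem(struct fake_objcg *objcg)
{
	/* 1. enable the "static key" first ... */
	atomic_store_explicit(&demo_kmem_enabled, true, memory_order_relaxed);

	/* 2. ... then publish the objcg pointer (release ~ rcu_assign_pointer) */
	atomic_store_explicit(&demo_objcg, objcg, memory_order_release);
}

int main(void)
{
	static struct fake_objcg oc;

	demo_charge(1);                         /* skipped: nothing published yet */
	demo_online_kmem(&oc);
	demo_charge(1);                         /* accounted: flag and pointer both visible */

	printf("charged: %ld\n", oc.charged);   /* prints 1 */
	return 0;
}

Built with a C11 compiler (e.g. gcc -std=c11), only the second demo_charge()
call is accounted, matching the guarantee the commit message describes.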