From patchwork Mon Jan 27 17:34:31 2020
X-Patchwork-Submitter: Roman Gushchin
X-Patchwork-Id: 11352981
From: Roman Gushchin <guro@fb.com>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
CC: Michal Hocko <mhocko@kernel.org>, Johannes Weiner <hannes@cmpxchg.org>,
    Shakeel Butt <shakeelb@google.com>, Vladimir Davydov <vdavydov.dev@gmail.com>,
    linux-kernel@vger.kernel.org, kernel-team@fb.com,
    Bharata B Rao <bharata@linux.ibm.com>, Yafang Shao <laoar.shao@gmail.com>,
    Roman Gushchin <guro@fb.com>
Subject: [PATCH v2 06/28] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge()
Date: Mon, 27 Jan 2020 09:34:31 -0800
Message-ID: <20200127173453.2089565-7-guro@fb.com>
In-Reply-To: <20200127173453.2089565-1-guro@fb.com>
References: <20200127173453.2089565-1-guro@fb.com>
MIME-Version: 1.0

Drop the _memcg suffix from the (__)memcg_kmem_(un)charge_memcg() functions:
the resulting names are shorter and more obvious. These are the most basic
functions, which simply charge or uncharge the given cgroup by the given
number of pages. Also fix up the corresponding comments.
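As a purely illustrative sketch (the caller below is hypothetical and not part
of this patch), a kernel-side user of the renamed wrappers charges a memcg
before using the pages and uncharges it when they are released:

  /*
   * Hypothetical caller, for illustration only: charge nr_pages kernel
   * pages to a memcg up front and uncharge them when they are released.
   */
  #include <linux/memcontrol.h>

  static int example_account_kernel_pages(struct mem_cgroup *memcg,
                                          gfp_t gfp, unsigned int nr_pages)
  {
          int ret;

          /* No-op unless kmem accounting is enabled; may return e.g. -ENOMEM. */
          ret = memcg_kmem_charge(memcg, gfp, nr_pages);
          if (ret)
                  return ret;

          /* ... allocate and use the pages here ... */

          /* Drop the charge once the pages are freed. */
          memcg_kmem_uncharge(memcg, nr_pages);
          return 0;
  }
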
Signed-off-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/memcontrol.h | 19 +++++++++---------
 mm/memcontrol.c            | 40 +++++++++++++++++++-------------------
 mm/slab.h                  |  4 ++--
 3 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 851c373edb74..c372bed6be80 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1362,12 +1362,11 @@ struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
 void memcg_kmem_put_cache(struct kmem_cache *cachep);
 
 #ifdef CONFIG_MEMCG_KMEM
+int __memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
+			unsigned int nr_pages);
+void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages);
 int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
 void __memcg_kmem_uncharge_page(struct page *page, int order);
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
-			      unsigned int nr_pages);
-void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
-				 unsigned int nr_pages);
 
 extern struct static_key_false memcg_kmem_enabled_key;
 extern struct workqueue_struct *memcg_kmem_cache_wq;
@@ -1403,19 +1402,19 @@ static inline void memcg_kmem_uncharge_page(struct page *page, int order)
 		__memcg_kmem_uncharge_page(page, order);
 }
 
-static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
-					  unsigned int nr_pages)
+static inline int memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
+				    unsigned int nr_pages)
 {
 	if (memcg_kmem_enabled())
-		return __memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
+		return __memcg_kmem_charge(memcg, gfp, nr_pages);
 	return 0;
 }
 
-static inline void memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
-					     unsigned int nr_pages)
+static inline void memcg_kmem_uncharge(struct mem_cgroup *memcg,
+				       unsigned int nr_pages)
 {
 	if (memcg_kmem_enabled())
-		__memcg_kmem_uncharge_memcg(memcg, nr_pages);
+		__memcg_kmem_uncharge(memcg, nr_pages);
 }
 
 /*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1561ef984104..8798702d165b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2819,15 +2819,15 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep)
 }
 
 /**
- * __memcg_kmem_charge_memcg: charge a kmem page
+ * __memcg_kmem_charge: charge a number of kernel pages to a memcg
  * @memcg: memory cgroup to charge
  * @gfp: reclaim mode
  * @nr_pages: number of pages to charge
  *
  * Returns 0 on success, an error code on failure.
  */
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
-			      unsigned int nr_pages)
+int __memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
+			unsigned int nr_pages)
 {
 	struct page_counter *counter;
 	int ret;
@@ -2854,6 +2854,21 @@ int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
 	return 0;
 }
 
+/**
+ * __memcg_kmem_uncharge: uncharge a number of kernel pages from a memcg
+ * @memcg: memcg to uncharge
+ * @nr_pages: number of pages to uncharge
+ */
+void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
+{
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		page_counter_uncharge(&memcg->kmem, nr_pages);
+
+	page_counter_uncharge(&memcg->memory, nr_pages);
+	if (do_memsw_account())
+		page_counter_uncharge(&memcg->memsw, nr_pages);
+}
+
 /**
  * __memcg_kmem_charge_page: charge a kmem page to the current memory cgroup
  * @page: page to charge
@@ -2872,7 +2887,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 
 	memcg = get_mem_cgroup_from_current();
 	if (!mem_cgroup_is_root(memcg)) {
-		ret = __memcg_kmem_charge_memcg(memcg, gfp, 1 << order);
+		ret = __memcg_kmem_charge(memcg, gfp, 1 << order);
 		if (!ret) {
 			page->mem_cgroup = memcg;
 			__SetPageKmemcg(page);
@@ -2882,21 +2897,6 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 	return ret;
 }
 
-/**
- * __memcg_kmem_uncharge_memcg: uncharge a kmem page
- * @memcg: memcg to uncharge
- * @nr_pages: number of pages to uncharge
- */
-void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
-				 unsigned int nr_pages)
-{
-	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
-		page_counter_uncharge(&memcg->kmem, nr_pages);
-
-	page_counter_uncharge(&memcg->memory, nr_pages);
-	if (do_memsw_account())
-		page_counter_uncharge(&memcg->memsw, nr_pages);
-}
-
 /**
  * __memcg_kmem_uncharge_page: uncharge a kmem page
  * @page: page to uncharge
@@ -2911,7 +2911,7 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
 		return;
 
 	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
-	__memcg_kmem_uncharge_memcg(memcg, nr_pages);
+	__memcg_kmem_uncharge(memcg, nr_pages);
 	page->mem_cgroup = NULL;
 
 	/* slab pages do not have PageKmemcg flag set */
diff --git a/mm/slab.h b/mm/slab.h
index a7ed8b422d8f..d943264f0f09 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -366,7 +366,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
 		return 0;
 	}
 
-	ret = memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
+	ret = memcg_kmem_charge(memcg, gfp, nr_pages);
 	if (ret)
 		goto out;
 
@@ -397,7 +397,7 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
 	if (likely(!mem_cgroup_is_root(memcg))) {
 		lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
 		mod_lruvec_state(lruvec, cache_vmstat_idx(s), -nr_pages);
-		memcg_kmem_uncharge_memcg(memcg, nr_pages);
+		memcg_kmem_uncharge(memcg, nr_pages);
 	} else {
 		mod_node_page_state(page_pgdat(page),
 				    cache_vmstat_idx(s),
 				    -nr_pages);