From patchwork Thu Sep 24 19:27:06 2020
X-Patchwork-Submitter: Roman Gushchin
X-Patchwork-Id: 11798169
From: Roman Gushchin
To: Andrew Morton
CC: Shakeel Butt, Johannes Weiner, Michal Hocko, Roman Gushchin
Subject: [PATCH v2 4/4] mm: convert page kmemcg type to a page memcg flag
Date: Thu, 24 Sep 2020 12:27:06 -0700
Message-ID: <20200924192706.3075680-5-guro@fb.com>
In-Reply-To: <20200924192706.3075680-1-guro@fb.com>
References: <20200924192706.3075680-1-guro@fb.com>

The PageKmemcg flag is currently defined as a page type (like buddy,
offline, table and guard). Semantically it means that the page was
accounted as kernel memory by the page allocator and has to be
uncharged on release.

As a side effect of defining the flag as a page type, the accounted
page can't be mapped to userspace (see page_has_type() and the
comments above it). In particular, this blocks the accounting of
vmalloc-backed memory used by some bpf maps, because these maps do
map the memory to userspace.

One option is to fix it by complicating the access to page->mapcount,
which provides some free bits for page->page_type. But it's way better
to move this flag into page->memcg_data flags. Indeed, the flag makes
no sense unless memory cgroups are enabled and, in particular, unless
the memory cgroup pointer is set.

This commit replaces PageKmemcg() and __SetPageKmemcg() with
PageMemcgKmem() and SetPageMemcgKmem(). __ClearPageKmemcg() can simply
be deleted, because clear_page_mem_cgroup() already does the job.

As a bonus, on a !CONFIG_MEMCG build the PageMemcgKmem() check is
compiled out.
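To make the flags scheme concrete, here is a rough standalone sketch
(an editor's illustration, not part of the patch) of how a flag bit can
live in the low bits of page->memcg_data alongside the memcg pointer.
It compiles as ordinary userspace C; struct page_stub and struct
mem_cgroup_stub are made-up stand-ins for the kernel structures, and
MEMCG_FLAGS_MASK mirrors the definition used earlier in this series.

/*
 * Standalone sketch: struct mem_cgroup is word-aligned, so the low
 * bits of the pointer stored in page->memcg_data are always zero and
 * can carry flags. The stub types below are hypothetical.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

enum page_memcg_flags {
	PG_MEMCG_OBJ_CGROUPS,	/* memcg_data points to an objcgs vector */
	PG_MEMCG_KMEM,		/* page is accounted as a kernel page */
	PG_MEMCG_LAST_FLAG,	/* first bit that is not a real flag */
};

#define MEMCG_FLAGS_MASK	((1UL << PG_MEMCG_LAST_FLAG) - 1)

struct mem_cgroup_stub {
	long refcount;		/* any word-aligned payload */
};

struct page_stub {
	unsigned long memcg_data;	/* memcg pointer | flag bits */
};

int main(void)
{
	static struct mem_cgroup_stub memcg;
	struct page_stub page = { 0 };

	/* charge path: store the memcg pointer, then set the kmem flag */
	page.memcg_data = (unsigned long)&memcg;
	page.memcg_data |= 1UL << PG_MEMCG_KMEM;

	/* lookup path: mask the flag bits off to recover the pointer */
	struct mem_cgroup_stub *got =
		(struct mem_cgroup_stub *)(page.memcg_data & ~MEMCG_FLAGS_MASK);
	bool kmem = page.memcg_data & (1UL << PG_MEMCG_KMEM);

	assert(got == &memcg);
	printf("kmem flag set: %d\n", kmem);
	return 0;
}

The same low-bit trick is what PAGE_MAPPING_ANON does with
page->mapping, which is why no extra space in struct page is needed.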
Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
---
 include/linux/memcontrol.h | 54 +++++++++++++++++++++++++++++++++++---
 include/linux/page-flags.h | 11 ++------
 mm/memcontrol.c            | 14 +++-------
 mm/page_alloc.c            |  2 +-
 4 files changed, 58 insertions(+), 23 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a498a7368cff..b8dcf4047f05 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -346,6 +346,8 @@ extern struct mem_cgroup *root_mem_cgroup;
 enum page_memcg_flags {
 	/* page->memcg_data is a pointer to an objcgs vector */
 	PG_MEMCG_OBJ_CGROUPS,
+	/* page has been accounted as a non-slab kernel page */
+	PG_MEMCG_KMEM,
 	/* the next bit after the last actual flag */
 	PG_MEMCG_LAST_FLAG,
 };
@@ -363,8 +365,12 @@
  */
 static inline struct mem_cgroup *page_mem_cgroup(struct page *page)
 {
+	unsigned long memcg_data = page->memcg_data;
+
 	VM_BUG_ON_PAGE(PageSlab(page), page);
-	return (struct mem_cgroup *)page->memcg_data;
+	VM_BUG_ON_PAGE(test_bit(PG_MEMCG_OBJ_CGROUPS, &memcg_data), page);
+
+	return (struct mem_cgroup *)(memcg_data & ~MEMCG_FLAGS_MASK);
 }
 
 /*
@@ -383,7 +389,7 @@ static inline struct mem_cgroup *page_mem_cgroup_check(struct page *page)
 	if (test_bit(PG_MEMCG_OBJ_CGROUPS, &memcg_data))
 		return NULL;
 
-	return (struct mem_cgroup *)memcg_data;
+	return (struct mem_cgroup *)(memcg_data & ~MEMCG_FLAGS_MASK);
 }
 
 /*
@@ -412,6 +418,36 @@ static inline void clear_page_mem_cgroup(struct page *page)
 	page->memcg_data = 0;
 }
 
+/*
+ * PageMemcgKmem - check if the page has MemcgKmem flag set
+ * @page: a pointer to the page struct
+ *
+ * Checks if the page has MemcgKmem flag set. The caller must ensure that
+ * the page has an associated memory cgroup. It's not safe to call this function
+ * against some types of pages, e.g. slab pages.
+ */
+static inline bool PageMemcgKmem(struct page *page)
+{
+	VM_BUG_ON_PAGE(test_bit(PG_MEMCG_OBJ_CGROUPS, &page->memcg_data), page);
+	return test_bit(PG_MEMCG_KMEM, &page->memcg_data);
+}
+
+/*
+ * SetPageMemcgKmem - set the page's MemcgKmem flag
+ * @page: a pointer to the page struct
+ *
+ * Set the page's MemcgKmem flag. The caller must ensure that the page has
+ * an associated memory cgroup. It's not safe to call this function
+ * against some types of pages, e.g. slab pages.
+ */
+static inline void SetPageMemcgKmem(struct page *page)
+{
+	VM_BUG_ON_PAGE(!page->memcg_data, page);
+	VM_BUG_ON_PAGE(test_bit(PG_MEMCG_OBJ_CGROUPS, &page->memcg_data), page);
+	__set_bit(PG_MEMCG_KMEM, &page->memcg_data);
+}
+
+
 #ifdef CONFIG_MEMCG_KMEM
 /*
  * page_obj_cgroups - get the object cgroups vector associated with a page
@@ -429,6 +465,7 @@ static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
 
 	VM_BUG_ON_PAGE(memcg_data && !test_bit(PG_MEMCG_OBJ_CGROUPS, &memcg_data),
 		       page);
+	VM_BUG_ON_PAGE(test_bit(PG_MEMCG_KMEM, &memcg_data), page);
 
 	return (struct obj_cgroup **)(memcg_data & ~MEMCG_FLAGS_MASK);
 }
@@ -445,8 +482,10 @@ static inline struct obj_cgroup **page_obj_cgroups_check(struct page *page)
 {
 	unsigned long memcg_data = page->memcg_data;
 
-	if (memcg_data && test_bit(PG_MEMCG_OBJ_CGROUPS, &memcg_data))
+	if (memcg_data && test_bit(PG_MEMCG_OBJ_CGROUPS, &memcg_data)) {
+		VM_BUG_ON_PAGE(test_bit(PG_MEMCG_KMEM, &memcg_data), page);
 		return (struct obj_cgroup **)(memcg_data & ~MEMCG_FLAGS_MASK);
+	}
 
 	return NULL;
 }
@@ -1118,6 +1157,15 @@ static inline void clear_page_mem_cgroup(struct page *page)
 {
 }
 
+static inline bool PageMemcgKmem(struct page *page)
+{
+	return false;
+}
+
+static inline void SetPageMemcgKmem(struct page *page)
+{
+}
+
 static inline bool mem_cgroup_is_root(struct mem_cgroup *memcg)
 {
 	return true;
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index fbbb841a9346..a7ca01ae78d9 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -712,9 +712,8 @@ PAGEFLAG_FALSE(DoubleMap)
 #define PAGE_MAPCOUNT_RESERVE	-128
 #define PG_buddy	0x00000080
 #define PG_offline	0x00000100
-#define PG_kmemcg	0x00000200
-#define PG_table	0x00000400
-#define PG_guard	0x00000800
+#define PG_table	0x00000200
+#define PG_guard	0x00000400
 
 #define PageType(page, flag)						\
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
@@ -765,12 +764,6 @@ PAGE_TYPE_OPS(Buddy, buddy)
  */
 PAGE_TYPE_OPS(Offline, offline)
 
-/*
- * If kmemcg is enabled, the buddy allocator will set PageKmemcg() on
- * pages allocated with __GFP_ACCOUNT. It gets cleared on page free.
- */
-PAGE_TYPE_OPS(Kmemcg, kmemcg)
-
 /*
  * Marks pages in use as page tables.
  */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 69e3dbb3d2cf..1d22fa4c4a88 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3081,7 +3081,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 	ret = __memcg_kmem_charge(memcg, gfp, 1 << order);
 	if (!ret) {
 		set_page_mem_cgroup(page, memcg);
-		__SetPageKmemcg(page);
+		SetPageMemcgKmem(page);
 		return 0;
 	}
 	css_put(&memcg->css);
@@ -3106,10 +3106,6 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
 	__memcg_kmem_uncharge(memcg, nr_pages);
 	clear_page_mem_cgroup(page);
 	css_put(&memcg->css);
-
-	/* slab pages do not have PageKmemcg flag set */
-	if (PageKmemcg(page))
-		__ClearPageKmemcg(page);
 }
 
 static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
@@ -6890,12 +6886,10 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	nr_pages = compound_nr(page);
 	ug->nr_pages += nr_pages;
 
-	if (!PageKmemcg(page)) {
-		ug->pgpgout++;
-	} else {
+	if (PageMemcgKmem(page))
 		ug->nr_kmem += nr_pages;
-		__ClearPageKmemcg(page);
-	}
+	else
+		ug->pgpgout++;
 
 	ug->dummy_page = page;
 	clear_page_mem_cgroup(page);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d4d181e15e7c..6807e37d78ba 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1197,7 +1197,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	}
 	if (PageMappingFlags(page))
 		page->mapping = NULL;
-	if (memcg_kmem_enabled() && PageKmemcg(page))
+	if (memcg_kmem_enabled() && PageMemcgKmem(page))
 		__memcg_kmem_uncharge_page(page, order);
 	if (check_free)
 		bad += check_free_page(page);