From patchwork Thu Jan 16 14:10:11 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11337125
From: Yafang Shao
To: dchinner@redhat.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, Yafang Shao
Subject: [PATCH] mm: verify page type before getting memcg from it
Date: Thu, 16 Jan 2020 09:10:11 -0500
Message-Id: <1579183811-1898-1-git-send-email-laoar.shao@gmail.com>

Per discussion with Dave[1], we always assume that we only ever put objects
from memcg-associated slab pages on the list_lru. list_lru_from_kmem() calls
memcg_from_slab_page(), which makes no attempt to verify that the page is
actually a slab page. But the binder code (in drivers/android/binder_alloc.c)
currently stores normal pages on the list_lru, rather than slab objects. The
only reason binder doesn't hit this issue is that its list_lru is not
configured to be memcg aware. To make this more robust, we should verify the
page type before getting the memcg from it.
In this patch, a new helper is introduced and the old helper is modified.
Now we have two helpers, as below:

struct mem_cgroup *__memcg_from_slab_page(struct page *page);
struct mem_cgroup *memcg_from_slab_page(struct page *page);

The first helper is used when we are sure the page is a slab head page,
while the second helper is used when we are not sure of the page type.

[1]. https://lore.kernel.org/linux-mm/20200106213103.GJ23195@dread.disaster.area/

Suggested-by: Dave Chinner
Signed-off-by: Yafang Shao
Signed-off-by: Roman Gushchin
Acked-by: Yafang Shao
---
 mm/memcontrol.c |  7 ++-----
 mm/slab.h       | 24 +++++++++++++++++++++++-
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9bd4ea7..7658b8e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -460,10 +460,7 @@ ino_t page_cgroup_ino(struct page *page)
 	unsigned long ino = 0;
 
 	rcu_read_lock();
-	if (PageSlab(page) && !PageTail(page))
-		memcg = memcg_from_slab_page(page);
-	else
-		memcg = READ_ONCE(page->mem_cgroup);
+	memcg = memcg_from_slab_page(page);
 	while (memcg && !(memcg->css.flags & CSS_ONLINE))
 		memcg = parent_mem_cgroup(memcg);
 	if (memcg)
@@ -748,7 +745,7 @@ void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
 	struct lruvec *lruvec;
 
 	rcu_read_lock();
-	memcg = memcg_from_slab_page(page);
+	memcg = __memcg_from_slab_page(page);
 
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg || memcg == root_mem_cgroup) {
diff --git a/mm/slab.h b/mm/slab.h
index 7e94700..2444ae4 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -329,7 +329,7 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
  * The kmem_cache can be reparented asynchronously. The caller must ensure
  * the memcg lifetime, e.g. by taking rcu_read_lock() or cgroup_mutex.
  */
-static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
 {
 	struct kmem_cache *s;
 
@@ -341,6 +341,23 @@ static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
 }
 
 /*
+ * If we are not sure whether the page can pass PageSlab() && !PageTail(),
+ * we should use this function. That's the difference between this helper
+ * and the above one.
+ */
+static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+{
+	struct mem_cgroup *memcg;
+
+	if (PageSlab(page) && !PageTail(page))
+		memcg = __memcg_from_slab_page(page);
+	else
+		memcg = READ_ONCE(page->mem_cgroup);
+
+	return memcg;
+}
+
+/*
  * Charge the slab page belonging to the non-root kmem_cache.
  * Can be called for non-root kmem_caches only.
  */
@@ -438,6 +455,11 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
 	return s;
 }
 
+static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
 {
 	return NULL;