From patchwork Tue Dec 17 11:29:15 2019
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11297343
From: Yafang Shao
To: hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com, akpm@linux-foundation.org, viro@zeniv.linux.org.uk
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Yafang Shao
Subject: [PATCH 0/4] memcg, inode: protect page cache from freeing inode
Date: Tue, 17 Dec 2019 06:29:15 -0500
Message-Id: <1576582159-5198-1-git-send-email-laoar.shao@gmail.com>

On my server there are some running MEMCGs protected by memory.{min, low}, but I found that the usage of these MEMCGs abruptly became very small, far below the protection limit. That confused me, and I finally found the cause was inode stealing: once an inode is freed, all of its page cache is dropped as well, no matter how many pages it holds. So if we intend to protect the page cache in a memcg, we must protect its host (the inode) first.
Otherwise the memcg protection can easily be bypassed by freeing the inode, especially if there are big files in the memcg.

The inherent mismatch between memcg and inode is a trouble. One inode can be shared by different MEMCGs, in which case its page cache may be charged to different MEMCGs, but that is a very rare case. Currently there is no perfect solution for this kind of issue, but the inode majority-writer ownership switching can help it more or less.

This patchset contains four patches. Patches 1-3 are minor optimizations and also preparation for patch 4; patch 4 fixes the real issue.

Yafang Shao (4):
  mm, memcg: reduce size of struct mem_cgroup by using bit field
  mm, memcg: introduce MEMCG_PROT_SKIP for memcg zero usage case
  mm, memcg: reset memcg's memory.{min, low} for reclaiming itself
  memcg, inode: protect page cache from freeing inode

 fs/inode.c                 |  9 +++++++
 include/linux/memcontrol.h | 37 ++++++++++++++++++++++-------
 mm/memcontrol.c            | 59 ++++++++++++++++++++++++++++++++++++++++++++--
 mm/vmscan.c                | 10 ++++++++
 4 files changed, 104 insertions(+), 11 deletions(-)