From patchwork Wed Jan 8 16:03:54 2020
X-Patchwork-Submitter: Yafang Shao
From: Yafang Shao <laoar.shao@gmail.com>
To: dchinner@redhat.com, hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com, guro@fb.com, akpm@linux-foundation.org, viro@zeniv.linux.org.uk
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Yafang Shao
Subject: [PATCH v3 0/3] protect page cache from freeing inode
Date: Wed, 8 Jan 2020 11:03:54 -0500
Message-Id: <1578499437-1664-1-git-send-email-laoar.shao@gmail.com>

On my server there are some running MEMCGs protected by memory.{min, low}, but I found that the usage of these MEMCGs abruptly became very small, far below the protection limit. That confused me, and I finally found the cause was inode stealing: once an inode is freed, all of the page cache it holds is dropped as well, no matter how many pages that is.
So if we intend to protect the page cache in a memcg, we must protect its host (the inode) first. Otherwise the memcg protection can easily be bypassed by freeing the inode, especially if there are big files in the memcg.

The inherent mismatch between memcg and inode is a problem. One inode can be shared by different MEMCGs, but that is a very rare case. If an inode is shared, the page cache it holds may be charged to different MEMCGs. Currently there is no perfect solution for this kind of issue, but the inode majority-writer ownership switching can help it more or less.

- Changes against v2:
  1. Separate the memcg patches from this patchset, as suggested by Roman.
     A separate patch is already ACKed by Roman; please could the MEMCG
     maintainers take a look at it [1].
  2. Improve the code around the usage of for_each_mem_cgroup(), as
     suggested by Dave.
  3. Use the memcg_low_reclaim passed from scan_control, instead of
     introducing a new member in struct mem_cgroup.
  4. Some other code improvements suggested by Dave.

- Changes against v1:
  Use the memcg passed from the shrink_control, instead of getting it
  from the inode itself, as suggested by Dave. That makes the layering
  better.

[1] https://lore.kernel.org/linux-mm/CALOAHbBhPgh3WEuLu2B6e2vj1J8K=gGOyCKzb8tKWmDqFs-rfQ@mail.gmail.com/

Yafang Shao (3):
  mm, list_lru: make memcg visible to lru walker isolation function
  mm, shrinker: make memcg low reclaim visible to lru walker isolation
    function
  memcg, inode: protect page cache from freeing inode

 fs/inode.c                 | 78 ++++++++++++++++++++++++++++++++++++++++++++--
 include/linux/memcontrol.h | 21 +++++++++++++
 include/linux/shrinker.h   |  3 ++
 mm/list_lru.c              | 47 +++++++++++++++++-----------
 mm/memcontrol.c            | 15 ---------
 mm/vmscan.c                | 27 +++++++++-------
 6 files changed, 143 insertions(+), 48 deletions(-)