From patchwork Sun Dec 15 07:34:15 2024
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 13908677
From: Yafang Shao <laoar.shao@gmail.com>
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeel.butt@linux.dev, muchun.song@linux.dev,
	akpm@linux-foundation.org
Cc: linux-mm@kvack.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [RFC PATCH 2/2] mm: Add support for nomlock to avoid folios being
	mlocked in a memcg
Date: Sun, 15 Dec 2024 15:34:15 +0800
Message-Id: <20241215073415.88961-3-laoar.shao@gmail.com>
In-Reply-To: <20241215073415.88961-1-laoar.shao@gmail.com>
References: <20241215073415.88961-1-laoar.shao@gmail.com>

The Use Case
============

We have a scenario where multiple services (cgroups) may share the same
file cache,
as illustrated below:

    download-proxy      application
          \                 /
       /shared_path/shared_files

When the application needs specific types of files, it sends an RPC
request to the download-proxy. The download-proxy then downloads the
files to shared paths, after which the application reads these shared
files.

All disk I/O operations are performed using buffered I/O. The reason for
using buffered I/O, rather than direct I/O, is that the download-proxy
itself may also read these shared files. This is because it serves as a
peer-to-peer (P2P) service:

    download-proxy of server1  <- P2P ->  download-proxy of server2
    /shared_path/shared_files             /shared_path/shared_files

The Problem
===========

Applications reading these shared files may use mlock to pin the files
in memory for performance reasons. However, the shared file cache is
charged to the memory cgroup of the download-proxy during the download
or P2P process. Consequently, the page cache pages of the shared files
might be mlocked within the download-proxy's memcg, as shown:

    download-proxy      application
          |                 /
      (charged)        (mlocked)
          |               /
         pagecache pages
               \
    /shared_path/shared_files

This setup leads to a frequent scenario where the memory usage of the
download-proxy's memcg reaches its limit, potentially resulting in OOM
events. This behavior is undesirable.

The Solution
============

To address this, we propose introducing a new cgroup file,
memory.nomlock, which prevents page cache pages from being mlocked in a
specific memcg when set to 1.

Implementation Options
----------------------

- Solution A: Allow file caches on the unevictable list to become
  reclaimable. This approach would require significant refactoring of
  the page reclaim logic.

- Solution B: Prevent file caches from being moved to the unevictable
  list during mlock and ignore the VM_LOCKED flag during page reclaim.
  This is a more straightforward solution and is the one we have chosen.
If the file caches are reclaimed from the download-proxy's memcg and
subsequently accessed by tasks in the application's memcg, a filemap
fault will occur. A new file cache will be faulted in, charged to the
application's memcg, and locked there.

Current limitations
===================

This solution is in its early stages and has the following limitations:

- Timing Dependency: memory.nomlock must be set before file caches are
  moved to the unevictable list. Otherwise, the file caches cannot be
  reclaimed.

- Metrics Inaccuracy: The "unevictable" metric in memory.stat and the
  "Mlocked" metric in /proc/meminfo may not be reliable. However, these
  metrics are already affected by the use of large folios.

If this solution is deemed acceptable, I will proceed with refining the
implementation.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 mm/mlock.c  | 9 +++++++++
 mm/rmap.c   | 8 +++++++-
 mm/vmscan.c | 5 +++++
 3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index cde076fa7d5e..9cebcf13929f 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -186,6 +186,7 @@ static inline struct folio *mlock_new(struct folio *folio)
 static void mlock_folio_batch(struct folio_batch *fbatch)
 {
 	struct lruvec *lruvec = NULL;
+	struct mem_cgroup *memcg;
 	unsigned long mlock;
 	struct folio *folio;
 	int i;
@@ -196,6 +197,10 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
 		folio = (struct folio *)((unsigned long)folio - mlock);
 		fbatch->folios[i] = folio;
 
+		memcg = folio_memcg(folio);
+		if (memcg && memcg->nomlock && mlock)
+			continue;
+
 		if (mlock & LRU_FOLIO)
 			lruvec = __mlock_folio(folio, lruvec);
 		else if (mlock & NEW_FOLIO)
@@ -241,8 +246,12 @@ bool need_mlock_drain(int cpu)
  */
 void mlock_folio(struct folio *folio)
 {
+	struct mem_cgroup *memcg = folio_memcg(folio);
 	struct folio_batch *fbatch;
 
+	if (memcg && memcg->nomlock)
+		return;
+
 	local_lock(&mlock_fbatch.lock);
 	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
 
diff --git a/mm/rmap.c b/mm/rmap.c
index c6c4d4ea29a7..6f16f86f9274 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -853,11 +853,17 @@ static bool folio_referenced_one(struct folio *folio,
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
 	int referenced = 0;
 	unsigned long start = address, ptes = 0;
+	bool ignore_mlock = false;
+	struct mem_cgroup *memcg;
+
+	memcg = folio_memcg(folio);
+	if (memcg && memcg->nomlock)
+		ignore_mlock = true;
 
 	while (page_vma_mapped_walk(&pvmw)) {
 		address = pvmw.address;
 
-		if (vma->vm_flags & VM_LOCKED) {
+		if (!ignore_mlock && vma->vm_flags & VM_LOCKED) {
 			if (!folio_test_large(folio) || !pvmw.pte) {
 				/* Restore the mlock which got missed */
 				mlock_vma_folio(folio, vma);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fd55c3ec0054..defd36be28e9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1283,6 +1283,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		if (folio_mapped(folio)) {
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
 			bool was_swapbacked = folio_test_swapbacked(folio);
+			struct mem_cgroup *memcg;
 
 			if (folio_test_pmd_mappable(folio))
 				flags |= TTU_SPLIT_HUGE_PMD;
@@ -1301,6 +1302,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			if (folio_test_large(folio))
 				flags |= TTU_SYNC;
 
+			memcg = folio_memcg(folio);
+			if (memcg && memcg->nomlock)
+				flags |= TTU_IGNORE_MLOCK;
+
 			try_to_unmap(folio, flags);
 			if (folio_mapped(folio)) {
 				stat->nr_unmap_fail += nr_pages;
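For completeness, a hypothetical usage sketch of the proposed knob. This assumes cgroup2 mounted at /sys/fs/cgroup, a kernel carrying this RFC series (the memory.nomlock file itself is introduced by the series, not by the hunks above), and an illustrative cgroup name:

```shell
# Hypothetical: requires a kernel with this RFC series applied.
# Per the timing limitation above, this must be set before the
# download-proxy populates the shared file cache.
mkdir -p /sys/fs/cgroup/download-proxy
echo 1 > /sys/fs/cgroup/download-proxy/memory.nomlock

# Verify the setting (reads back 0 or 1):
cat /sys/fs/cgroup/download-proxy/memory.nomlock
```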