From patchwork Wed Mar 27 21:31:00 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13607486
Date: Wed, 27 Mar 2024 14:31:00 -0700
In-Reply-To: <20240327213108.2384666-1-yuanchu@google.com>
References: <20240327213108.2384666-1-yuanchu@google.com>
Message-ID: <20240327213108.2384666-2-yuanchu@google.com>
Subject: [RFC PATCH v3 1/8] mm: multi-gen LRU: ignore non-leaf pmd_young for force_scan=true
From: Yuanchu Xie

When non-leaf pmd accessed bits are available, MGLRU page table walks
can clear the non-leaf pmd accessed bit and then skip the pte accessed
bits underneath, because the ptes are on a different node, so the walk
does not update the generation of those pages. When the next scan comes
around on the right node, the non-leaf pmd accessed bit may still be
clear, and the pte accessed bits are never checked.

This is sufficient for reclaim-driven aging, where the goal is to pick
a reasonably cold page, but the missed accesses matter when aging
proactively to measure the working set size of a node or memcg. Since
force_scan already disables various other optimizations, check
force_scan and ignore the non-leaf pmd accessed bit when it is set.
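
For context, an annotated sketch of the decision this one-line change
affects, using only the identifiers visible in the diff below; the
comments are an illustration of the intended behavior, not code taken
from the patch:

/*
 * Sketch only: the real walk_pmd_range() does much more than this.
 * Without force_scan, a clear non-leaf PMD accessed bit lets the walk
 * skip every pte underneath, which can hide accesses from proactive
 * (working set) aging as described above.
 */
if (!walk->force_scan && should_clear_pmd_young()) {
	if (!pmd_young(val))
		continue;	/* trust the PMD bit: skip these ptes */
	/* PMD accessed bit set: clear it and scan the ptes below */
}
/* force_scan: always descend and check the pte accessed bits */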
Signed-off-by: Yuanchu Xie
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4f9c854ce6cc..1a7c7d537db6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3522,7 +3522,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 
 		walk->mm_stats[MM_NONLEAF_TOTAL]++;
 
-		if (should_clear_pmd_young()) {
+		if (!walk->force_scan && should_clear_pmd_young()) {
 			if (!pmd_young(val))
 				continue;

From patchwork Wed Mar 27 21:31:01 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13607488
Date: Wed, 27 Mar 2024 14:31:01 -0700
In-Reply-To: <20240327213108.2384666-1-yuanchu@google.com>
References: <20240327213108.2384666-1-yuanchu@google.com>
Message-ID: <20240327213108.2384666-3-yuanchu@google.com>
Subject: [RFC PATCH v3 2/8] mm: aggregate working set information into histograms
From: Yuanchu Xie

Hierarchically aggregate all memcgs' MGLRU generations and their page
counts into working set page age histograms. The histograms break down
the system's working set per node, for anon and file pages separately.

The sysfs interfaces are as follows:

/sys/devices/system/node/nodeX/workingset_report/page_age
	A per-node page age histogram, showing an aggregate of the node's
	lruvecs. The information is extracted from MGLRU's per-generation
	page counters. Reading this file causes a hierarchical aging of
	all lruvecs, scanning pages and creating a new generation in each
	lruvec. For example:
		1000 anon=0 file=0
		2000 anon=0 file=0
		100000 anon=5533696 file=5566464
		18446744073709551615 anon=0 file=0

/sys/devices/system/node/nodeX/workingset_report/page_age_intervals
	A comma-separated list of times, in milliseconds, that configures
	the bin boundaries the page age histogram uses for aggregation.
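
As an illustration (not part of the patch), a userspace reader might
drive these files as below; the node number, the interval values, and
the buffer size are arbitrary assumptions, and error handling is
omitted:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *dir = "/sys/devices/system/node/node0/workingset_report";
	char path[256], buf[4096];
	ssize_t n;
	int fd;

	/* Configure bin boundaries at 1s, 2s and 100s of idle time. */
	snprintf(path, sizeof(path), "%s/page_age_intervals", dir);
	fd = open(path, O_WRONLY);
	write(fd, "1000,2000,100000", strlen("1000,2000,100000"));
	close(fd);

	/* Reading page_age ages the lruvecs and returns the histogram. */
	snprintf(path, sizeof(path), "%s/page_age", dir);
	fd = open(path, O_RDONLY);
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		fputs(buf, stdout);	/* e.g. "1000 anon=0 file=0" ... */
	}
	close(fd);
	return 0;
}
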
Signed-off-by: Yuanchu Xie --- drivers/base/node.c | 3 + include/linux/mmzone.h | 4 + include/linux/workingset_report.h | 69 +++++ mm/Kconfig | 9 + mm/Makefile | 1 + mm/internal.h | 9 + mm/memcontrol.c | 2 + mm/mmzone.c | 2 + mm/vmscan.c | 34 ++- mm/workingset_report.c | 413 ++++++++++++++++++++++++++++++ 10 files changed, 545 insertions(+), 1 deletion(-) create mode 100644 include/linux/workingset_report.h create mode 100644 mm/workingset_report.c diff --git a/drivers/base/node.c b/drivers/base/node.c index 1c05640461dd..4f589b8253f4 100644 --- a/drivers/base/node.c +++ b/drivers/base/node.c @@ -20,6 +20,7 @@ #include #include #include +#include static const struct bus_type node_subsys = { .name = "node", @@ -625,6 +626,7 @@ static int register_node(struct node *node, int num) } else { hugetlb_register_node(node); compaction_register_node(node); + wsr_register_node(node); } return error; @@ -641,6 +643,7 @@ void unregister_node(struct node *node) { hugetlb_unregister_node(node); compaction_unregister_node(node); + wsr_unregister_node(node); node_remove_accesses(node); node_remove_caches(node); device_unregister(&node->dev); diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index a497f189d988..8839931646ee 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -24,6 +24,7 @@ #include #include #include +#include /* Free memory management - zoned buddy allocator. */ #ifndef CONFIG_ARCH_FORCE_MAX_ORDER @@ -625,6 +626,9 @@ struct lruvec { struct lru_gen_mm_state mm_state; #endif #endif /* CONFIG_LRU_GEN */ +#ifdef CONFIG_WORKINGSET_REPORT + struct wsr_state wsr; +#endif /* CONFIG_WORKINGSET_REPORT */ #ifdef CONFIG_MEMCG struct pglist_data *pgdat; #endif diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h new file mode 100644 index 000000000000..0de640cb1ef0 --- /dev/null +++ b/include/linux/workingset_report.h @@ -0,0 +1,69 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_WORKINGSET_REPORT_H +#define _LINUX_WORKINGSET_REPORT_H + +#include +#include + +struct mem_cgroup; +struct pglist_data; +struct node; +struct lruvec; + +#ifdef CONFIG_WORKINGSET_REPORT + +#define WORKINGSET_REPORT_MIN_NR_BINS 2 +#define WORKINGSET_REPORT_MAX_NR_BINS 32 + +#define WORKINGSET_INTERVAL_MAX ((unsigned long)-1) +#define ANON_AND_FILE 2 + +struct wsr_report_bin { + unsigned long idle_age; + unsigned long nr_pages[ANON_AND_FILE]; +}; + +struct wsr_report_bins { + unsigned long nr_bins; + /* last bin contains WORKINGSET_INTERVAL_MAX */ + struct wsr_report_bin bins[WORKINGSET_REPORT_MAX_NR_BINS]; +}; + +struct wsr_page_age_histo { + unsigned long timestamp; + struct wsr_report_bins bins; +}; + +struct wsr_state { + /* breakdown of workingset by page age */ + struct mutex page_age_lock; + struct wsr_page_age_histo *page_age; +}; + +void wsr_init(struct lruvec *lruvec); +void wsr_destroy(struct lruvec *lruvec); + +/* + * Returns true if the wsr is configured to be refreshed. + * The next refresh time is stored in refresh_time. 
+ */ +bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, + struct pglist_data *pgdat); +void wsr_register_node(struct node *node); +void wsr_unregister_node(struct node *node); +#else +static inline void wsr_init(struct lruvec *lruvec) +{ +} +static inline void wsr_destroy(struct lruvec *lruvec) +{ +} +static inline void wsr_register_node(struct node *node) +{ +} +static inline void wsr_unregister_node(struct node *node) +{ +} +#endif /* CONFIG_WORKINGSET_REPORT */ + +#endif /* _LINUX_WORKINGSET_REPORT_H */ diff --git a/mm/Kconfig b/mm/Kconfig index ffc3a2ba3a8c..212f203b10b9 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1261,6 +1261,15 @@ config LOCK_MM_AND_FIND_VMA config IOMMU_MM_DATA bool +config WORKINGSET_REPORT + bool "Working set reporting" + depends on LRU_GEN && SYSFS + help + Report system and per-memcg working set to userspace. + + This option exports stats and events giving the user more insight + into its memory working set. + source "mm/damon/Kconfig" endmenu diff --git a/mm/Makefile b/mm/Makefile index e4b5b75aaec9..57093657030d 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -92,6 +92,7 @@ obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o obj-$(CONFIG_PAGE_COUNTER) += page_counter.o obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o +obj-$(CONFIG_WORKINGSET_REPORT) += workingset_report.o ifdef CONFIG_SWAP obj-$(CONFIG_MEMCG) += swap_cgroup.o endif diff --git a/mm/internal.h b/mm/internal.h index f309a010d50f..5e0caba64ee4 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -198,12 +198,21 @@ extern unsigned long highest_memmap_pfn; /* * in mm/vmscan.c: */ +struct scan_control; bool isolate_lru_page(struct page *page); bool folio_isolate_lru(struct folio *folio); void putback_lru_page(struct page *page); void folio_putback_lru(struct folio *folio); extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason); +#ifdef CONFIG_WORKINGSET_REPORT +/* + * in mm/wsr.c + */ +/* Requires wsr->page_age_lock held */ +void wsr_refresh_scan(struct lruvec *lruvec); +#endif + /* * in mm/rmap.c: */ diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 1ed40f9d3a27..2f07141de16c 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -65,6 +65,7 @@ #include #include #include +#include #include "internal.h" #include #include @@ -5457,6 +5458,7 @@ static void free_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node) if (!pn) return; + wsr_destroy(&pn->lruvec); free_percpu(pn->lruvec_stats_percpu); kfree(pn); } diff --git a/mm/mmzone.c b/mm/mmzone.c index c01896eca736..efca44c1b84b 100644 --- a/mm/mmzone.c +++ b/mm/mmzone.c @@ -90,6 +90,8 @@ void lruvec_init(struct lruvec *lruvec) */ list_del(&lruvec->lists[LRU_UNEVICTABLE]); + wsr_init(lruvec); + lru_gen_init_lruvec(lruvec); } diff --git a/mm/vmscan.c b/mm/vmscan.c index 1a7c7d537db6..b694d80ab2d1 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -56,6 +56,7 @@ #include #include #include +#include #include #include @@ -3815,7 +3816,7 @@ static bool inc_max_seq(struct lruvec *lruvec, unsigned long max_seq, return success; } -static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq, +bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq, struct scan_control *sc, bool can_swap, bool force_scan) { bool success; @@ -5606,6 +5607,8 @@ static int __init init_lru_gen(void) if (sysfs_create_group(mm_kobj, &lru_gen_attr_group)) pr_err("lru_gen: failed to create sysfs group\n"); + wsr_register_node(NULL); + 
debugfs_create_file("lru_gen", 0644, NULL, NULL, &lru_gen_rw_fops); debugfs_create_file("lru_gen_full", 0444, NULL, NULL, &lru_gen_ro_fops); @@ -5613,6 +5616,35 @@ static int __init init_lru_gen(void) }; late_initcall(init_lru_gen); +/****************************************************************************** + * workingset reporting + ******************************************************************************/ +#ifdef CONFIG_WORKINGSET_REPORT +void wsr_refresh_scan(struct lruvec *lruvec) +{ + DEFINE_MAX_SEQ(lruvec); + struct scan_control sc = { + .may_writepage = true, + .may_unmap = true, + .may_swap = true, + .proactive = true, + .reclaim_idx = MAX_NR_ZONES - 1, + .gfp_mask = GFP_KERNEL, + }; + unsigned int flags; + + set_task_reclaim_state(current, &sc.reclaim_state); + flags = memalloc_noreclaim_save(); + /* + * setting can_swap=true and force_scan=true ensures + * proper workingset stats when the system cannot swap. + */ + try_to_inc_max_seq(lruvec, max_seq, &sc, true, true); + memalloc_noreclaim_restore(flags); + set_task_reclaim_state(current, NULL); +} +#endif /* CONFIG_WORKINGSET_REPORT */ + #else /* !CONFIG_LRU_GEN */ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc) diff --git a/mm/workingset_report.c b/mm/workingset_report.c new file mode 100644 index 000000000000..98cdaffcb6b4 --- /dev/null +++ b/mm/workingset_report.c @@ -0,0 +1,413 @@ +// SPDX-License-Identifier: GPL-2.0 +// +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +void wsr_init(struct lruvec *lruvec) +{ + struct wsr_state *wsr = &lruvec->wsr; + + memset(wsr, 0, sizeof(*wsr)); + mutex_init(&wsr->page_age_lock); +} + +void wsr_destroy(struct lruvec *lruvec) +{ + struct wsr_state *wsr = &lruvec->wsr; + + mutex_destroy(&wsr->page_age_lock); + kfree(wsr->page_age); + memset(wsr, 0, sizeof(*wsr)); +} + +static int workingset_report_intervals_parse(char *src, + struct wsr_report_bins *bins) +{ + int err = 0, i = 0; + char *cur, *next = strim(src); + + if (*next == '\0') + return 0; + + while ((cur = strsep(&next, ","))) { + unsigned int interval; + + err = kstrtouint(cur, 0, &interval); + if (err) + goto out; + + bins->bins[i].idle_age = msecs_to_jiffies(interval); + if (i > 0 && bins->bins[i].idle_age <= bins->bins[i - 1].idle_age) { + err = -EINVAL; + goto out; + } + + if (++i == WORKINGSET_REPORT_MAX_NR_BINS) { + err = -ERANGE; + goto out; + } + } + + if (i && i < WORKINGSET_REPORT_MIN_NR_BINS - 1) { + err = -ERANGE; + goto out; + } + + bins->nr_bins = i; + bins->bins[i].idle_age = WORKINGSET_INTERVAL_MAX; +out: + return err ?: i; +} + +static unsigned long get_gen_start_time(const struct lru_gen_folio *lrugen, + unsigned long seq, + unsigned long max_seq, + unsigned long curr_timestamp) +{ + int younger_gen; + + if (seq == max_seq) + return curr_timestamp; + younger_gen = lru_gen_from_seq(seq + 1); + return READ_ONCE(lrugen->timestamps[younger_gen]); +} + +static void collect_page_age_type(const struct lru_gen_folio *lrugen, + struct wsr_report_bin *bin, + unsigned long max_seq, unsigned long min_seq, + unsigned long curr_timestamp, int type) +{ + unsigned long seq; + + for (seq = max_seq; seq + 1 > min_seq; seq--) { + int gen, zone; + unsigned long gen_end, gen_start, size = 0; + + gen = lru_gen_from_seq(seq); + + for (zone = 0; zone < MAX_NR_ZONES; zone++) + size += max( + READ_ONCE(lrugen->nr_pages[gen][type][zone]), + 0L); + + gen_start = 
get_gen_start_time(lrugen, seq, max_seq, + curr_timestamp); + gen_end = READ_ONCE(lrugen->timestamps[gen]); + + while (bin->idle_age != WORKINGSET_INTERVAL_MAX && + time_before(gen_end + bin->idle_age, curr_timestamp)) { + unsigned long gen_in_bin = (long)gen_start - + (long)curr_timestamp + + (long)bin->idle_age; + unsigned long gen_len = (long)gen_start - (long)gen_end; + + if (!gen_len) + break; + if (gen_in_bin) { + unsigned long split_bin = + size / gen_len * gen_in_bin; + + bin->nr_pages[type] += split_bin; + size -= split_bin; + } + gen_start = curr_timestamp - bin->idle_age; + bin++; + } + bin->nr_pages[type] += size; + } +} + +/* + * proportionally aggregate Multi-gen LRU bins into a working set report + * MGLRU generations: + * current time + * | max_seq timestamp + * | | max_seq - 1 timestamp + * | | | unbounded + * | | | | + * -------------------------------- + * | max_seq | ... | ... | min_seq + * -------------------------------- + * + * Bins: + * + * current time + * | current - idle_age[0] + * | | current - idle_age[1] + * | | | unbounded + * | | | | + * ------------------------------ + * | bin 0 | ... | ... | bin n-1 + * ------------------------------ + * + * Assume the heuristic that pages are in the MGLRU generation + * through uniform accesses, so we can aggregate them + * proportionally into bins. + */ +static void collect_page_age(struct wsr_page_age_histo *page_age, + const struct lruvec *lruvec) +{ + int type; + const struct lru_gen_folio *lrugen = &lruvec->lrugen; + unsigned long curr_timestamp = jiffies; + unsigned long max_seq = READ_ONCE((lruvec)->lrugen.max_seq); + unsigned long min_seq[ANON_AND_FILE] = { + READ_ONCE(lruvec->lrugen.min_seq[LRU_GEN_ANON]), + READ_ONCE(lruvec->lrugen.min_seq[LRU_GEN_FILE]), + }; + struct wsr_report_bins *bins = &page_age->bins; + + for (type = 0; type < ANON_AND_FILE; type++) { + struct wsr_report_bin *bin = &bins->bins[0]; + + collect_page_age_type(lrugen, bin, max_seq, min_seq[type], + curr_timestamp, type); + } +} + +/* First step: hierarchically scan child memcgs. */ +static void refresh_scan(struct wsr_state *wsr, struct mem_cgroup *root, + struct pglist_data *pgdat) +{ + struct mem_cgroup *memcg; + + memcg = mem_cgroup_iter(root, NULL, NULL); + do { + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); + + wsr_refresh_scan(lruvec); + cond_resched(); + } while ((memcg = mem_cgroup_iter(root, memcg, NULL))); +} + +/* Second step: aggregate child memcgs into the page age histogram. */ +static void refresh_aggregate(struct wsr_page_age_histo *page_age, + struct mem_cgroup *root, + struct pglist_data *pgdat) +{ + struct mem_cgroup *memcg; + struct wsr_report_bin *bin; + + /* + * page_age_intervals should free the page_age struct + * if no intervals are provided. + */ + VM_WARN_ON_ONCE(page_age->bins.bins[0].idle_age == + WORKINGSET_INTERVAL_MAX); + + for (bin = page_age->bins.bins; + bin->idle_age != WORKINGSET_INTERVAL_MAX; bin++) { + bin->nr_pages[0] = 0; + bin->nr_pages[1] = 0; + } + /* the last used bin has idle_age == WORKINGSET_INTERVAL_MAX. 
*/ + bin->nr_pages[0] = 0; + bin->nr_pages[1] = 0; + + memcg = mem_cgroup_iter(root, NULL, NULL); + do { + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); + + collect_page_age(page_age, lruvec); + cond_resched(); + } while ((memcg = mem_cgroup_iter(root, memcg, NULL))); + WRITE_ONCE(page_age->timestamp, jiffies); +} + +bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, + struct pglist_data *pgdat) +{ + struct wsr_page_age_histo *page_age; + + if (!READ_ONCE(wsr->page_age)) + return false; + + refresh_scan(wsr, root, pgdat); + mutex_lock(&wsr->page_age_lock); + page_age = READ_ONCE(wsr->page_age); + if (page_age) + refresh_aggregate(page_age, root, pgdat); + mutex_unlock(&wsr->page_age_lock); + return !!page_age; +} +EXPORT_SYMBOL_GPL(wsr_refresh_report); + +static struct pglist_data *kobj_to_pgdat(struct kobject *kobj) +{ + int nid = IS_ENABLED(CONFIG_NUMA) ? kobj_to_dev(kobj)->id : + first_memory_node; + + return NODE_DATA(nid); +} + +static struct wsr_state *kobj_to_wsr(struct kobject *kobj) +{ + return &mem_cgroup_lruvec(NULL, kobj_to_pgdat(kobj))->wsr; +} + +static ssize_t page_age_intervals_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + int len = 0; + struct wsr_state *wsr = kobj_to_wsr(kobj); + + mutex_lock(&wsr->page_age_lock); + + if (!!wsr->page_age) { + int i; + int nr_bins = wsr->page_age->bins.nr_bins; + + for (i = 0; i < nr_bins; ++i) { + struct wsr_report_bin *bin = + &wsr->page_age->bins.bins[i]; + + len += sysfs_emit_at(buf, len, "%u", + jiffies_to_msecs(bin->idle_age)); + if (i + 1 < nr_bins) + len += sysfs_emit_at(buf, len, ","); + } + } + len += sysfs_emit_at(buf, len, "\n"); + + mutex_unlock(&wsr->page_age_lock); + return len; +} + +static ssize_t page_age_intervals_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *src, size_t len) +{ + struct wsr_page_age_histo *page_age = NULL, *old; + char *buf = NULL; + int err = 0; + struct wsr_state *wsr = kobj_to_wsr(kobj); + + buf = kstrdup(src, GFP_KERNEL); + if (!buf) { + err = -ENOMEM; + goto failed; + } + + page_age = + kzalloc(sizeof(struct wsr_page_age_histo), GFP_KERNEL_ACCOUNT); + + if (!page_age) { + err = -ENOMEM; + goto failed; + } + + err = workingset_report_intervals_parse(buf, &page_age->bins); + if (err < 0) + goto failed; + + if (err == 0) { + kfree(page_age); + page_age = NULL; + } + + mutex_lock(&wsr->page_age_lock); + old = xchg(&wsr->page_age, page_age); + mutex_unlock(&wsr->page_age_lock); + kfree(old); + kfree(buf); + return len; +failed: + kfree(page_age); + kfree(buf); + + return err; +} + +static struct kobj_attribute page_age_intervals_attr = + __ATTR_RW(page_age_intervals); + +static ssize_t page_age_show(struct kobject *kobj, struct kobj_attribute *attr, + char *buf) +{ + struct wsr_report_bin *bin; + int ret = 0; + struct wsr_state *wsr = kobj_to_wsr(kobj); + + if (!READ_ONCE(wsr->page_age)) + return -EINVAL; + + wsr_refresh_report(wsr, NULL, kobj_to_pgdat(kobj)); + + mutex_lock(&wsr->page_age_lock); + if (!wsr->page_age) { + ret = -EINVAL; + goto unlock; + } + + for (bin = wsr->page_age->bins.bins; + bin->idle_age != WORKINGSET_INTERVAL_MAX; bin++) + ret += sysfs_emit_at(buf, ret, "%u anon=%lu file=%lu\n", + jiffies_to_msecs(bin->idle_age), + bin->nr_pages[0] * PAGE_SIZE, + bin->nr_pages[1] * PAGE_SIZE); + + ret += sysfs_emit_at(buf, ret, "%lu anon=%lu file=%lu\n", + WORKINGSET_INTERVAL_MAX, + bin->nr_pages[0] * PAGE_SIZE, + bin->nr_pages[1] * PAGE_SIZE); + +unlock: + mutex_unlock(&wsr->page_age_lock); + return ret; +} + 
+static struct kobj_attribute page_age_attr = __ATTR_RO(page_age);
+
+static struct attribute *workingset_report_attrs[] = {
+	&page_age_intervals_attr.attr, &page_age_attr.attr, NULL
+};
+
+static const struct attribute_group workingset_report_attr_group = {
+	.name = "workingset_report",
+	.attrs = workingset_report_attrs,
+};
+
+void wsr_register_node(struct node *node)
+{
+	struct kobject *kobj = node ? &node->dev.kobj : mm_kobj;
+	struct wsr_state *wsr;
+
+	if (IS_ENABLED(CONFIG_NUMA) && !node)
+		return;
+
+	wsr = kobj_to_wsr(kobj);
+
+	if (sysfs_create_group(kobj, &workingset_report_attr_group)) {
+		pr_warn("WSR failed to create group");
+		return;
+	}
+}
+EXPORT_SYMBOL_GPL(wsr_register_node);
+
+void wsr_unregister_node(struct node *node)
+{
+	struct kobject *kobj = &node->dev.kobj;
+	struct wsr_state *wsr;
+
+	if (IS_ENABLED(CONFIG_NUMA) && !node)
+		return;
+
+	wsr = kobj_to_wsr(kobj);
+	sysfs_remove_group(kobj, &workingset_report_attr_group);
+	wsr_destroy(mem_cgroup_lruvec(NULL, kobj_to_pgdat(kobj)));
+}
+EXPORT_SYMBOL_GPL(wsr_unregister_node);

From patchwork Wed Mar 27 21:31:02 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13607487
Date: Wed, 27 Mar 2024 14:31:02 -0700
In-Reply-To: <20240327213108.2384666-1-yuanchu@google.com>
References: <20240327213108.2384666-1-yuanchu@google.com>
Message-ID: <20240327213108.2384666-4-yuanchu@google.com>
Subject: [RFC PATCH v3 3/8] mm: use refresh interval to rate-limit workingset report aggregation
From: Yuanchu Xie

The refresh interval rate-limits reads of the workingset page age
histogram. When a report is generated, its timestamp is noted, and
subsequent reads return that same report until it is older than the
refresh interval, at which point a new report is generated.
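
For illustration only, a userspace consumer of the refresh_interval
file described below might look like the following sketch; the node
path and the 2000 ms value are assumptions, page_age_intervals is
assumed to have been configured already (see the previous patch), and
error handling is omitted:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void read_page_age(char *buf, size_t len)
{
	int fd = open("/sys/devices/system/node/node0/workingset_report/page_age",
		      O_RDONLY);
	ssize_t n = read(fd, buf, len - 1);

	buf[n > 0 ? n : 0] = '\0';
	close(fd);
}

int main(void)
{
	char a[4096], b[4096];
	int fd;

	/* A generated report stays valid for 2000 ms. */
	fd = open("/sys/devices/system/node/node0/workingset_report/refresh_interval",
		  O_WRONLY);
	write(fd, "2000", 4);
	close(fd);

	read_page_age(a, sizeof(a));	/* triggers scan + aggregation */
	read_page_age(b, sizeof(b));	/* within 2000 ms: same cached report */
	sleep(3);
	read_page_age(b, sizeof(b));	/* interval expired: report regenerated */
	return 0;
}
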
Sysfs interface /sys/devices/system/node/nodeX/workingset_report/refresh_interval time in milliseconds specifying how long the report is valid for Signed-off-by: Yuanchu Xie --- include/linux/workingset_report.h | 1 + mm/internal.h | 2 +- mm/vmscan.c | 27 ++++++++------ mm/workingset_report.c | 58 ++++++++++++++++++++++++++----- 4 files changed, 69 insertions(+), 19 deletions(-) diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h index 0de640cb1ef0..23d2ae747a31 100644 --- a/include/linux/workingset_report.h +++ b/include/linux/workingset_report.h @@ -35,6 +35,7 @@ struct wsr_page_age_histo { }; struct wsr_state { + unsigned long refresh_interval; /* breakdown of workingset by page age */ struct mutex page_age_lock; struct wsr_page_age_histo *page_age; diff --git a/mm/internal.h b/mm/internal.h index 5e0caba64ee4..151f09c6983e 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -210,7 +210,7 @@ extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason * in mm/wsr.c */ /* Requires wsr->page_age_lock held */ -void wsr_refresh_scan(struct lruvec *lruvec); +void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval); #endif /* diff --git a/mm/vmscan.c b/mm/vmscan.c index b694d80ab2d1..5f04a04f5261 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -5620,7 +5620,7 @@ late_initcall(init_lru_gen); * workingset reporting ******************************************************************************/ #ifdef CONFIG_WORKINGSET_REPORT -void wsr_refresh_scan(struct lruvec *lruvec) +void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval) { DEFINE_MAX_SEQ(lruvec); struct scan_control sc = { @@ -5633,15 +5633,22 @@ void wsr_refresh_scan(struct lruvec *lruvec) }; unsigned int flags; - set_task_reclaim_state(current, &sc.reclaim_state); - flags = memalloc_noreclaim_save(); - /* - * setting can_swap=true and force_scan=true ensures - * proper workingset stats when the system cannot swap. - */ - try_to_inc_max_seq(lruvec, max_seq, &sc, true, true); - memalloc_noreclaim_restore(flags); - set_task_reclaim_state(current, NULL); + if (refresh_interval) { + int gen = lru_gen_from_seq(max_seq); + unsigned long birth = READ_ONCE(lruvec->lrugen.timestamps[gen]); + + if (time_is_before_jiffies(birth + refresh_interval)) { + set_task_reclaim_state(current, &sc.reclaim_state); + flags = memalloc_noreclaim_save(); + /* + * setting can_swap=true and force_scan=true ensures + * proper workingset stats when the system cannot swap. + */ + try_to_inc_max_seq(lruvec, max_seq, &sc, true, true); + memalloc_noreclaim_restore(flags); + set_task_reclaim_state(current, NULL); + } + } } #endif /* CONFIG_WORKINGSET_REPORT */ diff --git a/mm/workingset_report.c b/mm/workingset_report.c index 98cdaffcb6b4..370e7d355604 100644 --- a/mm/workingset_report.c +++ b/mm/workingset_report.c @@ -181,7 +181,8 @@ static void collect_page_age(struct wsr_page_age_histo *page_age, /* First step: hierarchically scan child memcgs. 
 */
 static void refresh_scan(struct wsr_state *wsr, struct mem_cgroup *root,
-			 struct pglist_data *pgdat)
+			 struct pglist_data *pgdat,
+			 unsigned long refresh_interval)
 {
 	struct mem_cgroup *memcg;
@@ -189,7 +190,7 @@ static void refresh_scan(struct wsr_state *wsr, struct mem_cgroup *root,
 	do {
 		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
-		wsr_refresh_scan(lruvec);
+		wsr_refresh_scan(lruvec, refresh_interval);
 		cond_resched();
 	} while ((memcg = mem_cgroup_iter(root, memcg, NULL)));
 }
@@ -231,16 +232,25 @@ bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root,
 			struct pglist_data *pgdat)
 {
-	struct wsr_page_age_histo *page_age;
+	struct wsr_page_age_histo *page_age = NULL;
+	unsigned long refresh_interval = READ_ONCE(wsr->refresh_interval);
 
 	if (!READ_ONCE(wsr->page_age))
 		return false;
 
-	refresh_scan(wsr, root, pgdat);
+	if (!refresh_interval)
+		return false;
+
 	mutex_lock(&wsr->page_age_lock);
 	page_age = READ_ONCE(wsr->page_age);
-	if (page_age)
-		refresh_aggregate(page_age, root, pgdat);
+	if (!page_age)
+		goto unlock;
+	if (time_is_after_jiffies(page_age->timestamp + refresh_interval))
+		goto unlock;
+	refresh_scan(wsr, root, pgdat, refresh_interval);
+	refresh_aggregate(page_age, root, pgdat);
+
+unlock:
 	mutex_unlock(&wsr->page_age_lock);
 	return !!page_age;
 }
@@ -259,6 +269,35 @@ static struct wsr_state *kobj_to_wsr(struct kobject *kobj)
 	return &mem_cgroup_lruvec(NULL, kobj_to_pgdat(kobj))->wsr;
 }
 
+static ssize_t refresh_interval_show(struct kobject *kobj,
+				     struct kobj_attribute *attr, char *buf)
+{
+	struct wsr_state *wsr = kobj_to_wsr(kobj);
+	unsigned int interval = READ_ONCE(wsr->refresh_interval);
+
+	return sysfs_emit(buf, "%u\n", jiffies_to_msecs(interval));
+}
+
+static ssize_t refresh_interval_store(struct kobject *kobj,
+				      struct kobj_attribute *attr,
+				      const char *buf, size_t len)
+{
+	unsigned int interval;
+	int err;
+	struct wsr_state *wsr = kobj_to_wsr(kobj);
+
+	err = kstrtouint(buf, 0, &interval);
+	if (err)
+		return err;
+
+	WRITE_ONCE(wsr->refresh_interval, msecs_to_jiffies(interval));
+
+	return len;
+}
+
+static struct kobj_attribute refresh_interval_attr =
+	__ATTR_RW(refresh_interval);
+
 static ssize_t page_age_intervals_show(struct kobject *kobj,
 				       struct kobj_attribute *attr, char *buf)
 {
@@ -267,7 +306,7 @@ static ssize_t page_age_intervals_show(struct kobject *kobj,
 
 	mutex_lock(&wsr->page_age_lock);
 
-	if (!!wsr->page_age) {
+	if (wsr->page_age) {
 		int i;
 		int nr_bins = wsr->page_age->bins.nr_bins;
 
@@ -373,7 +412,10 @@ static ssize_t page_age_show(struct kobject *kobj, struct kobj_attribute *attr,
 static struct kobj_attribute page_age_attr = __ATTR_RO(page_age);
 
 static struct attribute *workingset_report_attrs[] = {
-	&page_age_intervals_attr.attr, &page_age_attr.attr, NULL
+	&refresh_interval_attr.attr,
+	&page_age_intervals_attr.attr,
+	&page_age_attr.attr,
+	NULL
 };
 
 static const struct attribute_group workingset_report_attr_group = {

From patchwork Wed Mar 27 21:31:03 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13607489
Date: Wed, 27 Mar 2024 14:31:03 -0700
In-Reply-To: <20240327213108.2384666-1-yuanchu@google.com>
References: <20240327213108.2384666-1-yuanchu@google.com>
Message-ID: <20240327213108.2384666-5-yuanchu@google.com>
Subject: [RFC PATCH v3 4/8] mm: report workingset during memory pressure driven scanning
From: Yuanchu Xie

When a node reaches its low watermarks and wakes up kswapd, notify all
userspace programs waiting on the workingset page age histogram of the
memory pressure, so that a userspace agent can read the workingset
report in time and make policy decisions such as logging, OOM-killing,
or migration.

Sysfs interface:

/sys/devices/system/node/nodeX/workingset_report/report_threshold
	Time in milliseconds that specifies how often the userspace agent
	can be notified for node memory pressure.
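
A plausible consumer of this notification, sketched here for
illustration only: it relies on the usual sysfs/kernfs notification
pattern (read the file once, poll() for POLLPRI/POLLERR, then seek back
to the start and re-read), and the node path and threshold value are
assumptions rather than part of the patch:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *dir = "/sys/devices/system/node/node0/workingset_report";
	char path[256], buf[4096];
	struct pollfd pfd = { .events = POLLPRI };
	ssize_t n;
	int fd;

	/* Notify at most roughly once per second under pressure. */
	snprintf(path, sizeof(path), "%s/report_threshold", dir);
	fd = open(path, O_WRONLY);
	write(fd, "1000", 4);
	close(fd);

	snprintf(path, sizeof(path), "%s/page_age", dir);
	pfd.fd = open(path, O_RDONLY);
	n = read(pfd.fd, buf, sizeof(buf) - 1);	/* arm the notification */

	for (;;) {
		/* Blocks until kernfs_notify() fires on memory pressure. */
		if (poll(&pfd, 1, -1) <= 0)
			break;
		if (pfd.revents & (POLLPRI | POLLERR)) {
			lseek(pfd.fd, 0, SEEK_SET);
			n = read(pfd.fd, buf, sizeof(buf) - 1);
			if (n > 0) {
				buf[n] = '\0';
				fputs(buf, stdout);	/* act on the report */
			}
		}
	}
	close(pfd.fd);
	return 0;
}
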
Signed-off-by: Yuanchu Xie --- include/linux/workingset_report.h | 4 +++ mm/internal.h | 6 +++++ mm/vmscan.c | 44 +++++++++++++++++++++++++++++++ mm/workingset_report.c | 39 +++++++++++++++++++++++++++ 4 files changed, 93 insertions(+) diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h index 23d2ae747a31..589d240d6251 100644 --- a/include/linux/workingset_report.h +++ b/include/linux/workingset_report.h @@ -35,7 +35,11 @@ struct wsr_page_age_histo { }; struct wsr_state { + unsigned long report_threshold; unsigned long refresh_interval; + + struct kernfs_node *page_age_sys_file; + /* breakdown of workingset by page age */ struct mutex page_age_lock; struct wsr_page_age_histo *page_age; diff --git a/mm/internal.h b/mm/internal.h index 151f09c6983e..36480c7ac0dd 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -209,8 +209,14 @@ extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason /* * in mm/wsr.c */ +void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat); /* Requires wsr->page_age_lock held */ void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval); +#else +static inline void notify_workingset(struct mem_cgroup *memcg, + struct pglist_data *pgdat) +{ +} #endif /* diff --git a/mm/vmscan.c b/mm/vmscan.c index 5f04a04f5261..c6acd5265b3f 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2535,6 +2535,15 @@ static bool can_age_anon_pages(struct pglist_data *pgdat, return can_demote(pgdat->node_id, sc); } +#ifdef CONFIG_WORKINGSET_REPORT +static void try_to_report_workingset(struct pglist_data *pgdat, struct scan_control *sc); +#else +static inline void try_to_report_workingset(struct pglist_data *pgdat, + struct scan_control *sc) +{ +} +#endif + #ifdef CONFIG_LRU_GEN #ifdef CONFIG_LRU_GEN_ENABLED @@ -3936,6 +3945,8 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc) if (!min_ttl || sc->order || sc->priority == DEF_PRIORITY) return; + try_to_report_workingset(pgdat, sc); + memcg = mem_cgroup_iter(NULL, NULL, NULL); do { struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); @@ -5650,6 +5661,36 @@ void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval) } } } + +static void try_to_report_workingset(struct pglist_data *pgdat, + struct scan_control *sc) +{ + struct mem_cgroup *memcg = sc->target_mem_cgroup; + struct wsr_state *wsr = &mem_cgroup_lruvec(memcg, pgdat)->wsr; + unsigned long threshold = READ_ONCE(wsr->report_threshold); + + if (sc->priority == DEF_PRIORITY) + return; + + if (!threshold) + return; + + if (!mutex_trylock(&wsr->page_age_lock)) + return; + + if (!wsr->page_age) { + mutex_unlock(&wsr->page_age_lock); + return; + } + + if (time_is_after_jiffies(wsr->page_age->timestamp + threshold)) { + mutex_unlock(&wsr->page_age_lock); + return; + } + + mutex_unlock(&wsr->page_age_lock); + notify_workingset(memcg, pgdat); +} #endif /* CONFIG_WORKINGSET_REPORT */ #else /* !CONFIG_LRU_GEN */ @@ -6177,6 +6218,9 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc) if (zone->zone_pgdat == last_pgdat) continue; last_pgdat = zone->zone_pgdat; + + if (!sc->proactive) + try_to_report_workingset(zone->zone_pgdat, sc); shrink_node(zone->zone_pgdat, sc); } diff --git a/mm/workingset_report.c b/mm/workingset_report.c index 370e7d355604..3ed3b0e8f8ad 100644 --- a/mm/workingset_report.c +++ b/mm/workingset_report.c @@ -269,6 +269,33 @@ static struct wsr_state *kobj_to_wsr(struct kobject *kobj) return &mem_cgroup_lruvec(NULL, 
kobj_to_pgdat(kobj))->wsr; } +static ssize_t report_threshold_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + struct wsr_state *wsr = kobj_to_wsr(kobj); + unsigned int threshold = READ_ONCE(wsr->report_threshold); + + return sysfs_emit(buf, "%u\n", jiffies_to_msecs(threshold)); +} + +static ssize_t report_threshold_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t len) +{ + unsigned int threshold; + struct wsr_state *wsr = kobj_to_wsr(kobj); + + if (kstrtouint(buf, 0, &threshold)) + return -EINVAL; + + WRITE_ONCE(wsr->report_threshold, msecs_to_jiffies(threshold)); + + return len; +} + +static struct kobj_attribute report_threshold_attr = + __ATTR_RW(report_threshold); + static ssize_t refresh_interval_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { @@ -412,6 +439,7 @@ static ssize_t page_age_show(struct kobject *kobj, struct kobj_attribute *attr, static struct kobj_attribute page_age_attr = __ATTR_RO(page_age); static struct attribute *workingset_report_attrs[] = { + &report_threshold_attr.attr, &refresh_interval_attr.attr, &page_age_intervals_attr.attr, &page_age_attr.attr, @@ -437,6 +465,9 @@ void wsr_register_node(struct node *node) pr_warn("WSR failed to created group"); return; } + + wsr->page_age_sys_file = + kernfs_walk_and_get(kobj->sd, "workingset_report/page_age"); } EXPORT_SYMBOL_GPL(wsr_register_node); @@ -450,6 +481,14 @@ void wsr_unregister_node(struct node *node) wsr = kobj_to_wsr(kobj); sysfs_remove_group(kobj, &workingset_report_attr_group); + kernfs_put(wsr->page_age_sys_file); wsr_destroy(mem_cgroup_lruvec(NULL, kobj_to_pgdat(kobj))); } EXPORT_SYMBOL_GPL(wsr_unregister_node); + +void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat) +{ + struct wsr_state *wsr = &mem_cgroup_lruvec(memcg, pgdat)->wsr; + + kernfs_notify(wsr->page_age_sys_file); +}

From patchwork Wed Mar 27 21:31:04 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13607490
Date: Wed, 27 Mar 2024 14:31:04 -0700
In-Reply-To: <20240327213108.2384666-1-yuanchu@google.com>
References: <20240327213108.2384666-1-yuanchu@google.com>
Message-ID: <20240327213108.2384666-6-yuanchu@google.com>
Subject: [RFC PATCH v3 5/8] mm: extend working set reporting to memcgs
From: Yuanchu Xie
To: David Hildenbrand , "Aneesh Kumar K.V" , Khalid Aziz , Henry Huang , Yu Zhao , Dan Williams , Gregory Price , Huang Ying
Cc: Wei Xu , David Rientjes , Greg Kroah-Hartman , "Rafael J. Wysocki" , Andrew Morton , Johannes Weiner , Michal Hocko , Roman Gushchin , Muchun Song , Shuah Khan , Yosry Ahmed , Matthew Wilcox , Sudarshan Rajagopalan , Kairui Song , "Michael S. Tsirkin" , Vasily Averin , Nhat Pham , Miaohe Lin , Qi Zheng , Abel Wu , "Vishal Moola (Oracle)" , Kefeng Wang , Yuanchu Xie , linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org

Break down the system-wide working set reporting into per-memcg
reports, each of which aggregates its children hierarchically. The
per-node working set reporting histograms and the refresh/report
threshold files are presented as memcg files, showing a report
containing all the nodes.
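
As a purely hypothetical illustration of how the files documented below
could be driven from userspace (the cgroup path, node id, and values are
made up; only the file names and the "N<id>=..." syntax come from this
patch), an agent might configure and read them roughly as follows:

/* Hypothetical agent: configure and read memory.workingset.* files. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define CG "/sys/fs/cgroup/workload/"

static int write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t ret;

	if (fd < 0)
		return -1;
	ret = write(fd, val, strlen(val));
	close(fd);
	return ret < 0 ? -1 : 0;
}

int main(void)
{
	char buf[8192];
	ssize_t len;
	int fd;

	/* Bin boundaries and refresh cadence for node 0, in milliseconds. */
	write_str(CG "memory.workingset.page_age_intervals",
		  "N0=1000,2000,3000,4000,5000");
	write_str(CG "memory.workingset.refresh_interval", "N0=2000");

	/* Read back the histogram; each node section starts with "N<id>". */
	fd = open(CG "memory.workingset.page_age", O_RDONLY);
	if (fd < 0)
		return 1;
	len = read(fd, buf, sizeof(buf) - 1);
	if (len > 0) {
		buf[len] = '\0';
		fputs(buf, stdout);
	}
	close(fd);
	return 0;
}
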
Memcg interface:

/sys/fs/cgroup/.../memory.workingset.page_age
	The memcg equivalent of the sysfs workingset page age histogram;
	it breaks down the workingset of this memcg and its children into
	page age intervals. Each node is prefixed with a node header and
	a newline. Non-proactive direct reclaim on this memcg can also
	wake up userspace agents that are waiting on this file.
	e.g.
	N0
	1000 anon=0 file=0
	2000 anon=0 file=0
	3000 anon=0 file=0
	4000 anon=0 file=0
	5000 anon=0 file=0
	18446744073709551615 anon=0 file=0

/sys/fs/cgroup/.../memory.workingset.page_age_intervals
	Configures the intervals for the page age histogram. This file
	operates on a per-node basis, allowing different intervals for
	each node.
	e.g. echo N0=1000,2000,3000,4000,5000 > memory.workingset.page_age_intervals

/sys/fs/cgroup/.../memory.workingset.refresh_interval
	The memcg equivalent of the sysfs refresh interval: a per-node
	number of how much time a page age histogram is valid for, in
	milliseconds.
	e.g. echo N0=2000 > memory.workingset.refresh_interval

/sys/fs/cgroup/.../memory.workingset.report_threshold
	The memcg equivalent of the sysfs report threshold: a per-node
	number of how often a userspace agent waiting on the page age
	histogram can be woken up, in milliseconds.
	e.g. echo N0=1000 > memory.workingset.report_threshold

Signed-off-by: Yuanchu Xie
--- include/linux/memcontrol.h | 5 + include/linux/workingset_report.h | 6 +- mm/internal.h | 2 + mm/memcontrol.c | 267 +++++++++++++++++++++++++++++- mm/workingset_report.c | 10 +- 5 files changed, 286 insertions(+), 4 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 20ff87f8e001..7d7bc0928961 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -335,6 +335,11 @@ struct mem_cgroup { struct lru_gen_mm_list mm_list; #endif +#ifdef CONFIG_WORKINGSET_REPORT + /* memory.workingset.page_age file */ + struct cgroup_file workingset_page_age_file; +#endif + struct mem_cgroup_per_node *nodeinfo[]; }; diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h index 589d240d6251..502542c812b3 100644 --- a/include/linux/workingset_report.h +++ b/include/linux/workingset_report.h @@ -9,6 +9,7 @@ struct mem_cgroup; struct pglist_data; struct node; struct lruvec; +struct cgroup_file; #ifdef CONFIG_WORKINGSET_REPORT @@ -38,7 +39,10 @@ struct wsr_state { unsigned long report_threshold; unsigned long refresh_interval; - struct kernfs_node *page_age_sys_file; + union { + struct kernfs_node *page_age_sys_file; + struct cgroup_file *page_age_cgroup_file; + }; /* breakdown of workingset by page age */ struct mutex page_age_lock; diff --git a/mm/internal.h b/mm/internal.h index 36480c7ac0dd..3730c8399ad4 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -212,6 +212,8 @@ extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat); /* Requires wsr->page_age_lock held */ void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval); +int workingset_report_intervals_parse(char *src, + struct wsr_report_bins *bins); #else static inline void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 2f07141de16c..75bda5f7994d 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -7005,6 +7005,245 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf, return nbytes; } +#ifdef CONFIG_WORKINGSET_REPORT +static int
memory_ws_page_age_intervals_show(struct seq_file *m, void *v) +{ + int nid; + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + + for_each_node_state(nid, N_MEMORY) { + struct wsr_state *wsr; + struct wsr_page_age_histo *page_age; + int i, nr_bins; + + wsr = &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + mutex_lock(&wsr->page_age_lock); + page_age = wsr->page_age; + if (!page_age) + goto no_page_age; + seq_printf(m, "N%d=", nid); + nr_bins = page_age->bins.nr_bins; + for (i = 0; i < nr_bins; ++i) { + struct wsr_report_bin *bin = + &page_age->bins.bins[i]; + + seq_printf(m, "%u", jiffies_to_msecs(bin->idle_age)); + if (i + 1 < nr_bins) + seq_putc(m, ','); + } + seq_putc(m, ' '); +no_page_age: + mutex_unlock(&wsr->page_age_lock); + } + seq_putc(m, '\n'); + + return 0; +} + +static ssize_t memory_wsr_interval_parse(struct kernfs_open_file *of, char *buf, + size_t nbytes, unsigned int *nid_out, + struct wsr_report_bins *bins) +{ + char *node, *intervals; + unsigned int nid; + int err; + + buf = strstrip(buf); + intervals = buf; + node = strsep(&intervals, "="); + + if (*node != 'N') + return -EINVAL; + + err = kstrtouint(node + 1, 0, &nid); + if (err) + return err; + + if (nid >= nr_node_ids || !node_state(nid, N_MEMORY)) + return -EINVAL; + + err = workingset_report_intervals_parse(intervals, bins); + if (err < 0) + return err; + + *nid_out = nid; + return err; +} + +static ssize_t memory_ws_page_age_intervals_write(struct kernfs_open_file *of, + char *buf, size_t nbytes, + loff_t off) +{ + unsigned int nid; + int err; + struct wsr_page_age_histo *page_age = NULL, *old; + struct wsr_state *wsr; + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); + + page_age = + kzalloc(sizeof(struct wsr_page_age_histo), GFP_KERNEL_ACCOUNT); + + if (!page_age) { + err = -ENOMEM; + goto failed; + } + + err = memory_wsr_interval_parse(of, buf, nbytes, &nid, &page_age->bins); + if (err < 0) + goto failed; + + if (err == 0) { + kfree(page_age); + page_age = NULL; + } + + wsr = &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + mutex_lock(&wsr->page_age_lock); + old = xchg(&wsr->page_age, page_age); + mutex_unlock(&wsr->page_age_lock); + kfree(old); + return nbytes; +failed: + kfree(page_age); + return err; +} + +static int memory_ws_refresh_interval_show(struct seq_file *m, void *v) +{ + int nid; + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + + for_each_node_state(nid, N_MEMORY) { + struct wsr_state *wsr = + &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + + seq_printf(m, "N%d=%u ", nid, + jiffies_to_msecs(READ_ONCE(wsr->refresh_interval))); + } + seq_putc(m, '\n'); + + return 0; +} + +static ssize_t memory_wsr_threshold_parse(char *buf, size_t nbytes, + unsigned int *nid_out, + unsigned int *msecs) +{ + char *node, *threshold; + unsigned int nid; + int err; + + buf = strstrip(buf); + threshold = buf; + node = strsep(&threshold, "="); + + if (*node != 'N') + return -EINVAL; + + err = kstrtouint(node + 1, 0, &nid); + if (err) + return err; + + if (nid >= nr_node_ids || !node_state(nid, N_MEMORY)) + return -EINVAL; + + err = kstrtouint(threshold, 0, msecs); + if (err) + return err; + + *nid_out = nid; + + return nbytes; +} + +static ssize_t memory_ws_refresh_interval_write(struct kernfs_open_file *of, + char *buf, size_t nbytes, + loff_t off) +{ + unsigned int nid, msecs; + struct wsr_state *wsr; + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); + ssize_t ret = memory_wsr_threshold_parse(buf, nbytes, &nid, &msecs); + + if (ret < 0) + return ret; + + wsr = 
&mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + WRITE_ONCE(wsr->refresh_interval, msecs_to_jiffies(msecs)); + return ret; +} + +static int memory_ws_report_threshold_show(struct seq_file *m, void *v) +{ + int nid; + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + + for_each_node_state(nid, N_MEMORY) { + struct wsr_state *wsr = + &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + + seq_printf(m, "N%d=%u ", nid, + jiffies_to_msecs(READ_ONCE(wsr->report_threshold))); + } + seq_putc(m, '\n'); + + return 0; +} + +static ssize_t memory_ws_report_threshold_write(struct kernfs_open_file *of, + char *buf, size_t nbytes, + loff_t off) +{ + unsigned int nid, msecs; + struct wsr_state *wsr; + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); + ssize_t ret = memory_wsr_threshold_parse(buf, nbytes, &nid, &msecs); + + if (ret < 0) + return ret; + + wsr = &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + WRITE_ONCE(wsr->report_threshold, msecs_to_jiffies(msecs)); + return ret; +} + +static int memory_ws_page_age_show(struct seq_file *m, void *v) +{ + int nid; + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + + for_each_node_state(nid, N_MEMORY) { + struct wsr_state *wsr = + &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + struct wsr_report_bin *bin; + + if (!READ_ONCE(wsr->page_age)) + continue; + + wsr_refresh_report(wsr, memcg, NODE_DATA(nid)); + mutex_lock(&wsr->page_age_lock); + if (!wsr->page_age) + goto unlock; + seq_printf(m, "N%d\n", nid); + for (bin = wsr->page_age->bins.bins; + bin->idle_age != WORKINGSET_INTERVAL_MAX; bin++) + seq_printf(m, "%u anon=%lu file=%lu\n", + jiffies_to_msecs(bin->idle_age), + bin->nr_pages[0] * PAGE_SIZE, + bin->nr_pages[1] * PAGE_SIZE); + + seq_printf(m, "%lu anon=%lu file=%lu\n", WORKINGSET_INTERVAL_MAX, + bin->nr_pages[0] * PAGE_SIZE, + bin->nr_pages[1] * PAGE_SIZE); + +unlock: + mutex_unlock(&wsr->page_age_lock); + } + + return 0; +} +#endif + static struct cftype memory_files[] = { { .name = "current", @@ -7073,7 +7312,33 @@ static struct cftype memory_files[] = { .flags = CFTYPE_NS_DELEGATABLE, .write = memory_reclaim, }, - { } /* terminate */ +#ifdef CONFIG_WORKINGSET_REPORT + { + .name = "workingset.page_age_intervals", + .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE, + .seq_show = memory_ws_page_age_intervals_show, + .write = memory_ws_page_age_intervals_write, + }, + { + .name = "workingset.refresh_interval", + .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE, + .seq_show = memory_ws_refresh_interval_show, + .write = memory_ws_refresh_interval_write, + }, + { + .name = "workingset.report_threshold", + .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE, + .seq_show = memory_ws_report_threshold_show, + .write = memory_ws_report_threshold_write, + }, + { + .name = "workingset.page_age", + .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE, + .file_offset = offsetof(struct mem_cgroup, workingset_page_age_file), + .seq_show = memory_ws_page_age_show, + }, +#endif + {} /* terminate */ }; struct cgroup_subsys memory_cgrp_subsys = { diff --git a/mm/workingset_report.c b/mm/workingset_report.c index 3ed3b0e8f8ad..b00ffbfebcab 100644 --- a/mm/workingset_report.c +++ b/mm/workingset_report.c @@ -20,9 +20,12 @@ void wsr_init(struct lruvec *lruvec) { struct wsr_state *wsr = &lruvec->wsr; + struct mem_cgroup *memcg = lruvec_memcg(lruvec); memset(wsr, 0, sizeof(*wsr)); mutex_init(&wsr->page_age_lock); + if (memcg && !mem_cgroup_is_root(memcg)) + wsr->page_age_cgroup_file = &memcg->workingset_page_age_file; } void wsr_destroy(struct lruvec 
*lruvec) @@ -34,7 +37,7 @@ void wsr_destroy(struct lruvec *lruvec) memset(wsr, 0, sizeof(*wsr)); } -static int workingset_report_intervals_parse(char *src, +int workingset_report_intervals_parse(char *src, struct wsr_report_bins *bins) { int err = 0, i = 0; @@ -490,5 +493,8 @@ void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat) { struct wsr_state *wsr = &mem_cgroup_lruvec(memcg, pgdat)->wsr; - kernfs_notify(wsr->page_age_sys_file); + if (mem_cgroup_is_root(memcg)) + kernfs_notify(wsr->page_age_sys_file); + else + cgroup_file_notify(wsr->page_age_cgroup_file); }

From patchwork Wed Mar 27 21:31:05 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13607491
Date: Wed, 27 Mar 2024 14:31:05 -0700
In-Reply-To: <20240327213108.2384666-1-yuanchu@google.com>
References: <20240327213108.2384666-1-yuanchu@google.com>
Message-ID: <20240327213108.2384666-7-yuanchu@google.com>
Subject: [RFC PATCH v3 6/8] mm: add per-memcg reaccess histogram
From: Yuanchu Xie
To: David Hildenbrand , "Aneesh Kumar K.V" , Khalid Aziz , Henry Huang , Yu Zhao , Dan Williams , Gregory Price , Huang Ying
Cc: Wei Xu , David Rientjes , Greg Kroah-Hartman , "Rafael J. Wysocki" , Andrew Morton , Johannes Weiner , Michal Hocko , Roman Gushchin , Muchun Song , Shuah Khan , Yosry Ahmed , Matthew Wilcox , Sudarshan Rajagopalan , Kairui Song ,
Tsirkin" , Vasily Averin , Nhat Pham , Miaohe Lin , Qi Zheng , Abel Wu , "Vishal Moola (Oracle)" , Kefeng Wang , Yuanchu Xie , linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org X-Rspamd-Queue-Id: 2F5888000A X-Rspam-User: X-Stat-Signature: mzr6n4yh5omyus9d6q8nn38yw7ax998c X-Rspamd-Server: rspam03 X-HE-Tag: 1711575100-307000 X-HE-Meta: U2FsdGVkX1/LTVWmu8Y8Q5rtYmou1V6cbFvoJR5uabuAvTRt6Ygg8+URkUzE8LORPfKn2kvNJWA0unQfdM5rFQisDuPg84sgG12L8J+NJxUaO/9yLinuG/evBke3hsogFuxYM6QF5XO8lL4zcV387EHFyMz6hrj8x+pNJEmF4epYevC9gaUCRD42D/KSn/BT/kSqv/5bL9ZAlNCZW2d1RmspDFFDVRk44JryFFVGTsSKB9V+X6UUuS4nsOT0oMhl60+WI8xNU9+ZEduVtMj0OgiUdnSta+Gbv1bnU3N0zEZbnmaNDPNOhUGYofxzZmaFB9yEgOS77+S3+MS/zO57JwcJn33OtO9skKJueU1E9K0j9SdCqUIGu4NUXlfEW/s1Ljq7z3mPcRqhFVov0S1d7kefzmIeBd9j3PeSMv/hjfkdOLzDipzRXH4MSIfT0HWZgVoU4shEW6EiVEs+T3gYwQ4/IJr3onRan6PlgaJspjJ6MSwlwEcBvyodwSV6MIG0xWHTwhPFSoZkmkghWPE3QvnRzQ6l54CpB793yOpX/2pMEb9sVL+Lj0M/ea4mX4Z2+EvXG2tjRRKEE3HsVNP2pg6j3YEciAly5NKkUAoXEa9+dSv8Zjw5lZjxrCyVXzG2YPnSNEFdLCX+aAj/0N5WCiMWaF2rJJlroeB7EKP6UFlg8sWvqEgy9MueUAFROsR4YW3Q9D5wZ0oK5iwVzbv06OYbx9OCeD4c//s2lJu61vTRnmw1m6AhSyocm4F1Z9drZcJvA0IwgJmbhQiaouXFbHSFA7O51vhfNvdPaa2jIL2s6qFu4EP4pAEOmg0KLHNzf1BNT0itODMLr4pzeimzymU8hpCHroeo+3KdzraD57SdZ5cNGgoeL6wsUF0sTqU5iitsvX6NJzEjTcbJY5c6ZboRCCX7tWzWBivTfBv6Qtev0a1bfI19WHb3dpR+cj4nrXjRZAROioHk/eThLSS uLadhuDV +Swj2qslZAHqRy6Y5Nvaoq44CiUvMm1UObv4gT+WbOQZ37x1KYrLdNso5Zf/u+k3Jeode7/GpnN6qhvXFC74RLWeatmaoeGB/2t2YFKvCGpyLhNX0spx8vQVB8ESj8B8H3YB0mL56PE+Cv1+qlX5/beV58VwvBQeiCdQ4sDZuEbdVYrkeWGwcY7pOCpRq6VzUNpbskGAeBC8DlWoAbuSh5hxJMCB/8LoO7dsx66uX67ZH5x0m6u5oq6Zl31HG83TI6FBQw+uTC1jkcUzklp2dEZ4f3DnIvNDk796gGd6oAD2gth/WS2Md9biezhbtgf7hGp+2hj+DskUFzeHW8VC1ytVpOKwsAZkJvGbk6qYDfI7YD6QRbHKaA3u9uo/VGjFBPbdO59+Uwe50DdQsn31gwHYZ5091C/iDxDhyMBKdlHh9O+6m8ssMo0kawAJxwHunWyls/NSAKeYkRTO7lTR52LXi+WMDHwoXGYPv/mBS4omcwmjuA1XskoYojxXNVFhwp8dPjE+O6vLLoj6JamMpAWAIxpnv4JtbmacUCFXSJ/kxzUVI5Wd+vTgKMDnUpGwdDQ1vFPYyHLHs6C3TWZz6QdJ2INwnvWNmQP1yuKATUU3a43DGErFE5AeRxNS5NZJzGrCLiCW3fp26LkeYSiz2Tlx9WWiQtmHoX8o2lVVKWDyOoSELOdwUYFhLMQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: A reaccess refers to detecting an access on a page via refault or access bit harvesting after the initial access. Similar to the working set histogram, the reaccess histogram breaks down reaccesses into user-defined bins. It tracks reaccesses from MGLRU walks, where a move from older generations to the young generation counts as a reaccess. Swapped out pages are tracked with the generation number encoded in mm/workingset.c, and additional tracking is added for enabled memory cgroups to track an additional 4 swapped out generations. Memcg interfaces: /sys/fs/cgroup/.../memory.workingset.reaccess The format is identical to memory.workingset.page_age, but the content breaks down reaccesses into pre-defined intervals. e.g. N0 1000 anon=6330 file=0 2000 anon=72 file=0 4000 anon=0 file=0 18446744073709551615 anon=0 file=0 N1 18446744073709551615 anon=0 file=0 /sys/fs/cgroup/.../memory.workingset.reaccess_intervals Defines the per-node intervals for memory.workingset.reaccess. e.g. 
echo N0=120000,240000,480000 > memory.workingset.reaccess_intervals Signed-off-by: Yuanchu Xie --- include/linux/workingset_report.h | 20 +++ mm/internal.h | 28 ++++ mm/memcontrol.c | 112 ++++++++++++++ mm/vmscan.c | 8 +- mm/workingset.c | 9 +- mm/workingset_report.c | 249 ++++++++++++++++++++++++++++++ 6 files changed, 419 insertions(+), 7 deletions(-) diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h index 502542c812b3..e908c5678b1e 100644 --- a/include/linux/workingset_report.h +++ b/include/linux/workingset_report.h @@ -4,6 +4,7 @@ #include #include +#include struct mem_cgroup; struct pglist_data; @@ -19,6 +20,12 @@ struct cgroup_file; #define WORKINGSET_INTERVAL_MAX ((unsigned long)-1) #define ANON_AND_FILE 2 +/* + * MAX_NR_EVICTED_GENS is set to 4 so we can track the same number of + * generations as MGLRU has resident. + */ +#define MAX_NR_EVICTED_GENS 4 + struct wsr_report_bin { unsigned long idle_age; unsigned long nr_pages[ANON_AND_FILE]; @@ -35,6 +42,18 @@ struct wsr_page_age_histo { struct wsr_report_bins bins; }; +struct wsr_evicted_gen { + unsigned long timestamp; + int seq; +}; + +struct wsr_reaccess_histo { + struct rcu_head rcu; + /* evicted gens start from min_seq[LRU_GEN_ANON] - 1 */ + struct wsr_evicted_gen gens[MAX_NR_EVICTED_GENS]; + struct wsr_report_bins bins; +}; + struct wsr_state { unsigned long report_threshold; unsigned long refresh_interval; @@ -47,6 +66,7 @@ struct wsr_state { /* breakdown of workingset by page age */ struct mutex page_age_lock; struct wsr_page_age_histo *page_age; + struct wsr_reaccess_histo __rcu *reaccess; }; void wsr_init(struct lruvec *lruvec); diff --git a/mm/internal.h b/mm/internal.h index 3730c8399ad4..077340b526e8 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -205,16 +205,44 @@ void putback_lru_page(struct page *page); void folio_putback_lru(struct folio *folio); extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason); +/* + * in mm/workingset.c + */ +#define WORKINGSET_SHIFT 1 +#define EVICTION_SHIFT ((BITS_PER_LONG - BITS_PER_XA_VALUE) + \ + WORKINGSET_SHIFT + NODES_SHIFT + \ + MEM_CGROUP_ID_SHIFT) +#define EVICTION_MASK (~0UL >> EVICTION_SHIFT) + #ifdef CONFIG_WORKINGSET_REPORT /* * in mm/wsr.c */ +void report_lru_gen_eviction(struct lruvec *lruvec, int type, int min_seq); +void lru_gen_report_reaccess(struct lruvec *lruvec, + struct lru_gen_mm_walk *walk); +void report_reaccess_refault(struct lruvec *lruvec, unsigned long token, + int type, int nr_pages); void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat); /* Requires wsr->page_age_lock held */ void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval); int workingset_report_intervals_parse(char *src, struct wsr_report_bins *bins); #else +struct lru_gen_mm_walk; +static inline void report_lru_gen_eviction(struct lruvec *lruvec, int type, + int min_seq) +{ +} +static inline void lru_gen_report_reaccess(struct lruvec *lruvec, + struct lru_gen_mm_walk *walk) +{ +} +static inline void report_reaccess_refault(struct lruvec *lruvec, + unsigned long token, int type, + int nr_pages) +{ +} static inline void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat) { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 75bda5f7994d..2a39a4445bb7 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -7108,6 +7108,71 @@ static ssize_t memory_ws_page_age_intervals_write(struct kernfs_open_file *of, return err; } +static int memory_ws_reaccess_intervals_show(struct seq_file 
*m, void *v) +{ + int nid; + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + + for_each_node_state(nid, N_MEMORY) { + struct wsr_state *wsr; + struct wsr_reaccess_histo *reaccess; + int i, nr_bins; + + wsr = &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + rcu_read_lock(); + reaccess = rcu_dereference(wsr->reaccess); + if (!reaccess) + goto unlock; + seq_printf(m, "N%d=", nid); + nr_bins = reaccess->bins.nr_bins; + for (i = 0; i < nr_bins; ++i) { + struct wsr_report_bin *bin = &reaccess->bins.bins[i]; + + seq_printf(m, "%u", jiffies_to_msecs(bin->idle_age)); + if (i + 1 < nr_bins) + seq_putc(m, ','); + } + seq_putc(m, ' '); +unlock: + rcu_read_unlock(); + } + seq_putc(m, '\n'); + + return 0; +} + +static ssize_t memory_ws_reaccess_intervals_write(struct kernfs_open_file *of, + char *buf, size_t nbytes, + loff_t off) +{ + unsigned int nid; + int err; + struct wsr_state *wsr; + struct wsr_reaccess_histo *reaccess = NULL, *old; + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); + + reaccess = kzalloc(sizeof(struct wsr_reaccess_histo), GFP_KERNEL); + if (!reaccess) + return -ENOMEM; + + err = memory_wsr_interval_parse(of, buf, nbytes, &nid, &reaccess->bins); + if (err < 0) + goto failed; + + if (err == 0) { + kfree(reaccess); + reaccess = NULL; + } + + wsr = &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + old = xchg(&wsr->reaccess, reaccess); + kfree_rcu(old, rcu); + return nbytes; +failed: + kfree(reaccess); + return err; +} + static int memory_ws_refresh_interval_show(struct seq_file *m, void *v) { int nid; @@ -7242,6 +7307,42 @@ static int memory_ws_page_age_show(struct seq_file *m, void *v) return 0; } + +static int memory_ws_reaccess_histogram_show(struct seq_file *m, void *v) +{ + int nid; + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + + for_each_node_state(nid, N_MEMORY) { + struct wsr_state *wsr = + &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + struct wsr_reaccess_histo *reaccess; + struct wsr_report_bin *bin; + + rcu_read_lock(); + reaccess = rcu_dereference(wsr->reaccess); + + if (!reaccess) + goto unlock; + + wsr_refresh_report(wsr, memcg, NODE_DATA(nid)); + + seq_printf(m, "N%d\n", nid); + for (bin = reaccess->bins.bins; + bin->idle_age != WORKINGSET_INTERVAL_MAX; bin++) + seq_printf(m, "%u anon=%lu file=%lu\n", + jiffies_to_msecs(bin->idle_age), + bin->nr_pages[0], bin->nr_pages[1]); + + seq_printf(m, "%lu anon=%lu file=%lu\n", WORKINGSET_INTERVAL_MAX, + bin->nr_pages[0], bin->nr_pages[1]); + +unlock: + rcu_read_unlock(); + } + + return 0; +} #endif static struct cftype memory_files[] = { @@ -7337,6 +7438,17 @@ static struct cftype memory_files[] = { .file_offset = offsetof(struct mem_cgroup, workingset_page_age_file), .seq_show = memory_ws_page_age_show, }, + { + .name = "workingset.reaccess_intervals", + .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE, + .seq_show = memory_ws_reaccess_intervals_show, + .write = memory_ws_reaccess_intervals_write, + }, + { + .name = "workingset.reaccess", + .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE, + .seq_show = memory_ws_reaccess_histogram_show, + }, #endif {} /* terminate */ }; diff --git a/mm/vmscan.c b/mm/vmscan.c index c6acd5265b3f..4d9245e2c0d1 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3637,6 +3637,7 @@ static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_ mem_cgroup_unlock_pages(); if (walk->batched) { + lru_gen_report_reaccess(lruvec, walk); spin_lock_irq(&lruvec->lru_lock); reset_batch_size(lruvec, walk); spin_unlock_irq(&lruvec->lru_lock); @@ -3709,6 +3710,7 
@@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap) } done: reset_ctrl_pos(lruvec, type, true); + report_lru_gen_eviction(lruvec, type, lrugen->min_seq[type] + 1); WRITE_ONCE(lrugen->min_seq[type], lrugen->min_seq[type] + 1); return true; @@ -3750,6 +3752,7 @@ static bool try_to_inc_min_seq(struct lruvec *lruvec, bool can_swap) continue; reset_ctrl_pos(lruvec, type, true); + report_lru_gen_eviction(lruvec, type, min_seq[type]); WRITE_ONCE(lrugen->min_seq[type], min_seq[type]); success = true; } @@ -4565,11 +4568,14 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap sc->nr_scanned -= folio_nr_pages(folio); } + walk = current->reclaim_state->mm_walk; + if (walk && walk->batched) + lru_gen_report_reaccess(lruvec, walk); + spin_lock_irq(&lruvec->lru_lock); move_folios_to_lru(lruvec, &list); - walk = current->reclaim_state->mm_walk; if (walk && walk->batched) reset_batch_size(lruvec, walk); diff --git a/mm/workingset.c b/mm/workingset.c index 226012974328..057fbedd91ea 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -17,6 +17,8 @@ #include #include +#include "internal.h" + /* * Double CLOCK lists * @@ -179,12 +181,6 @@ * refault distance will immediately activate the refaulting page. */ -#define WORKINGSET_SHIFT 1 -#define EVICTION_SHIFT ((BITS_PER_LONG - BITS_PER_XA_VALUE) + \ - WORKINGSET_SHIFT + NODES_SHIFT + \ - MEM_CGROUP_ID_SHIFT) -#define EVICTION_MASK (~0UL >> EVICTION_SHIFT) - /* * Eviction timestamps need to be able to cover the full range of * actionable refaults. However, bits are tight in the xarray @@ -294,6 +290,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow) goto unlock; mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + type, delta); + report_reaccess_refault(lruvec, token, type, delta); if (!recent) goto unlock; diff --git a/mm/workingset_report.c b/mm/workingset_report.c index b00ffbfebcab..504d840bbe6a 100644 --- a/mm/workingset_report.c +++ b/mm/workingset_report.c @@ -34,6 +34,7 @@ void wsr_destroy(struct lruvec *lruvec) mutex_destroy(&wsr->page_age_lock); kfree(wsr->page_age); + kfree_rcu(wsr->reaccess, rcu); memset(wsr, 0, sizeof(*wsr)); } @@ -259,6 +260,254 @@ bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, } EXPORT_SYMBOL_GPL(wsr_refresh_report); +static void lru_gen_collect_reaccess_refault(struct wsr_report_bins *bins, + unsigned long timestamp, int type, + int nr_pages) +{ + unsigned long curr_timestamp = jiffies; + struct wsr_report_bin *bin = &bins->bins[0]; + + while (bin->idle_age != WORKINGSET_INTERVAL_MAX && + time_before(timestamp + bin->idle_age, curr_timestamp)) + bin++; + + bin->nr_pages[type] += nr_pages; +} + +static void collect_reaccess_type(struct lru_gen_mm_walk *walk, + const struct lru_gen_folio *lrugen, + struct wsr_report_bin *bin, + unsigned long max_seq, unsigned long min_seq, + unsigned long curr_timestamp, int type) +{ + unsigned long seq; + + /* Skip max_seq because a reaccess moves a page from another seq + * to max_seq. We use the negative change in page count from + * other seqs to track the number of reaccesses. 
+ */ + for (seq = max_seq - 1; seq + 1 > min_seq; seq--) { + int younger_gen, gen, zone; + unsigned long gen_end, gen_start; + long delta = 0; + + gen = lru_gen_from_seq(seq); + + for (zone = 0; zone < MAX_NR_ZONES; zone++) { + long nr_pages = walk->nr_pages[gen][type][zone]; + + if (nr_pages < 0) + delta += -nr_pages; + } + + gen_end = READ_ONCE(lrugen->timestamps[gen]); + younger_gen = lru_gen_from_seq(seq + 1); + gen_start = READ_ONCE(lrugen->timestamps[younger_gen]); + + /* ensure gen_start is within idle_age of bin */ + while (bin->idle_age != WORKINGSET_INTERVAL_MAX && + time_before(gen_start + bin->idle_age, curr_timestamp)) + bin++; + + while (bin->idle_age != WORKINGSET_INTERVAL_MAX && + time_before(gen_end + bin->idle_age, curr_timestamp)) { + unsigned long proportion = (long)gen_start - + (long)curr_timestamp + + (long)bin->idle_age; + unsigned long gen_len = (long)gen_start - (long)gen_end; + + if (!gen_len) + break; + if (proportion) { + unsigned long split_bin = + delta / gen_len * proportion; + bin->nr_pages[type] += split_bin; + delta -= split_bin; + } + gen_start = curr_timestamp - bin->idle_age; + bin++; + } + bin->nr_pages[type] += delta; + } +} + +/* + * Reaccesses are propagated up the memcg hierarchy during scanning/refault. + * Collect the reaccess information from a multi-gen LRU walk. + */ +static void lru_gen_collect_reaccess(struct wsr_report_bins *bins, + struct lru_gen_folio *lrugen, + struct lru_gen_mm_walk *walk) +{ + int type; + unsigned long curr_timestamp = jiffies; + unsigned long max_seq = READ_ONCE(walk->max_seq); + unsigned long min_seq[ANON_AND_FILE] = { + READ_ONCE(lrugen->min_seq[LRU_GEN_ANON]), + READ_ONCE(lrugen->min_seq[LRU_GEN_FILE]), + }; + + for (type = 0; type < ANON_AND_FILE; type++) { + struct wsr_report_bin *bin = &bins->bins[0]; + + collect_reaccess_type(walk, lrugen, bin, max_seq, + min_seq[type], curr_timestamp, type); + } +} + +void lru_gen_report_reaccess(struct lruvec *lruvec, struct lru_gen_mm_walk *walk) +{ + struct lru_gen_folio *lrugen = &lruvec->lrugen; + struct mem_cgroup *memcg = lruvec_memcg(lruvec); + + for (memcg = lruvec_memcg(lruvec); memcg; + memcg = parent_mem_cgroup(memcg)) { + struct lruvec *memcg_lruvec = + mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec)); + struct wsr_state *wsr = &memcg_lruvec->wsr; + struct wsr_reaccess_histo *reaccess; + + rcu_read_lock(); + reaccess = rcu_dereference(wsr->reaccess); + if (!reaccess) { + rcu_read_unlock(); + continue; + } + lru_gen_collect_reaccess(&reaccess->bins, lrugen, walk); + rcu_read_unlock(); + } +} + +static inline int evicted_gen_from_seq(unsigned long seq) +{ + return seq % MAX_NR_EVICTED_GENS; +} + +void report_lru_gen_eviction(struct lruvec *lruvec, int type, int min_seq) +{ + int seq; + struct wsr_reaccess_histo *reaccess = NULL; + struct lru_gen_folio *lrugen = &lruvec->lrugen; + struct wsr_state *wsr = &lruvec->wsr; + + /* + * Since file can go ahead of anon, min_seq[file] >= min_seq[anon] + * only record evictions when anon moves forward. + */ + if (type != LRU_GEN_ANON) + return; + + /* + * lru_lock is held during eviction, so reaccess accounting + * can be serialized. 
+ */ + lockdep_assert_held(&lruvec->lru_lock); + + rcu_read_lock(); + reaccess = rcu_dereference(wsr->reaccess); + if (!reaccess) + goto unlock; + + for (seq = READ_ONCE(lrugen->min_seq[LRU_GEN_ANON]); seq < min_seq; + ++seq) { + int evicted_gen = evicted_gen_from_seq(seq); + int gen = lru_gen_from_seq(seq); + + WRITE_ONCE(reaccess->gens[evicted_gen].seq, seq); + WRITE_ONCE(reaccess->gens[evicted_gen].timestamp, + READ_ONCE(lrugen->timestamps[gen])); + } + +unlock: + rcu_read_unlock(); +} + +/* + * May yield an incorrect timestamp if the token collides with + * a recently evicted generation. + */ +static int timestamp_from_workingset_token(struct lruvec *lruvec, + unsigned long token, + unsigned long *timestamp) +{ + int type, err = -EEXIST; + unsigned long seq, evicted_min_seq; + struct wsr_reaccess_histo *reaccess = NULL; + struct lru_gen_folio *lrugen = &lruvec->lrugen; + struct wsr_state *wsr = &lruvec->wsr; + unsigned long min_seq[ANON_AND_FILE] = { + READ_ONCE(lrugen->min_seq[LRU_GEN_ANON]), + READ_ONCE(lrugen->min_seq[LRU_GEN_FILE]) + }; + + token >>= LRU_REFS_WIDTH; + + /* recent eviction */ + for (type = 0; type < ANON_AND_FILE; ++type) { + if (token == + (min_seq[type] & (EVICTION_MASK >> LRU_REFS_WIDTH))) { + int gen = lru_gen_from_seq(min_seq[type]); + + *timestamp = READ_ONCE(lrugen->timestamps[gen]); + return 0; + } + } + + rcu_read_lock(); + reaccess = rcu_dereference(wsr->reaccess); + if (!reaccess) + goto unlock; + + /* look up in evicted gen buffer */ + evicted_min_seq = min_seq[LRU_GEN_ANON] - MAX_NR_EVICTED_GENS; + if (min_seq[LRU_GEN_ANON] < MAX_NR_EVICTED_GENS) + evicted_min_seq = 0; + for (seq = min_seq[LRU_GEN_ANON]; seq > evicted_min_seq; --seq) { + int gen = evicted_gen_from_seq(seq - 1); + + if (token == (reaccess->gens[gen].seq & + (EVICTION_MASK >> LRU_REFS_WIDTH))) { + *timestamp = reaccess->gens[gen].timestamp; + + goto unlock; + } + } + +unlock: + rcu_read_unlock(); + return err; +} + +void report_reaccess_refault(struct lruvec *lruvec, unsigned long token, + int type, int nr_pages) +{ + unsigned long timestamp; + int err; + struct mem_cgroup *memcg = lruvec_memcg(lruvec); + + err = timestamp_from_workingset_token(lruvec, token, ×tamp); + if (err) + return; + + for (memcg = lruvec_memcg(lruvec); memcg; + memcg = parent_mem_cgroup(memcg)) { + struct lruvec *memcg_lruvec = + mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec)); + struct wsr_state *wsr = &memcg_lruvec->wsr; + struct wsr_reaccess_histo *reaccess = NULL; + + rcu_read_lock(); + reaccess = rcu_dereference(wsr->reaccess); + if (!reaccess) { + rcu_read_unlock(); + continue; + } + lru_gen_collect_reaccess_refault(&reaccess->bins, timestamp, + type, nr_pages); + rcu_read_unlock(); + } +} + static struct pglist_data *kobj_to_pgdat(struct kobject *kobj) { int nid = IS_ENABLED(CONFIG_NUMA) ? 
kobj_to_dev(kobj)->id :

From patchwork Wed Mar 27 21:31:06 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13607492
Date: Wed, 27 Mar 2024 14:31:06 -0700
In-Reply-To: <20240327213108.2384666-1-yuanchu@google.com>
References: <20240327213108.2384666-1-yuanchu@google.com>
Message-ID: <20240327213108.2384666-8-yuanchu@google.com>
Subject: [RFC PATCH v3 7/8] mm: add kernel aging thread for workingset reporting
From: Yuanchu Xie
To: David Hildenbrand , "Aneesh Kumar K.V" , Khalid Aziz , Henry Huang , Yu Zhao , Dan Williams , Gregory Price , Huang Ying
Cc: Wei Xu , David Rientjes , Greg Kroah-Hartman , "Rafael J. Wysocki" , Andrew Morton , Johannes Weiner , Michal Hocko , Roman Gushchin , Muchun Song , Shuah Khan , Yosry Ahmed , Matthew Wilcox , Sudarshan Rajagopalan , Kairui Song ,
Tsirkin" , Vasily Averin , Nhat Pham , Miaohe Lin , Qi Zheng , Abel Wu , "Vishal Moola (Oracle)" , Kefeng Wang , Yuanchu Xie , linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org X-Rspamd-Queue-Id: 347A18000A X-Rspam-User: X-Stat-Signature: fwjq4639wei9oddr6png1xuni5qis6e5 X-Rspamd-Server: rspam01 X-HE-Tag: 1711575101-689709 X-HE-Meta: U2FsdGVkX1/n2e4BwziBMfhkL+0OwRIaSjEZ1RnLtzS3yIT9S3VVmAY7rONbbbBklmaDCLrcrFFxGVWVa9r4eW6CWNgepLkw4KSL39NqHxeq7RutTt2npslZMy/asUovlpavr0D9DxE3NkRkYY4ZppKgGgR/YrfMWIWDcS2fYE+ayUUKBw9fDk+tkUmLo63D4ZVGbKFRAKrdt1VuQw5qEDrjqDd/47HHtyIGyA1HEBRngDR6Tbh3MGon8zQflSdYGqNBuVIfieVS0FuqfHGjQrieos/PMuqItgU3fttw3zT8saOaZs2LnyEEGV32hrGQr9cQcu46rFs6hPrfmZPSQKkN5RMGXg2PdwjChdbz2mvqt8KkGjGEmi+gUc3t9VQSFH48B8ChTk4cGo/E8yGg66yOeB4dh9Moh8c3v/hyP/xA03t1tKW67eWhNkXR3WtWkLZTDdPrXPRFS2Yhkprvwlgcj1y9fV5erbYGrZWjdxDGJIOBhyAJPvK6Q8YBN9svnRJTXLAafLYm4YYE7t1nXlPs/O85FLRz97c8AHZit+bY/aGK/zMoVCYFGP6fsWugMnC/qSi9WwuiPgvduHagg8D59xBEBaGCAQUXXynJj6qfQKz+UwJz6Z+o3MwEg4DMKpH3HvcDVDehpoN8Lbns+A9v0+xMzKaRufd3/2B57WbRN3kccuSU6IpqjPFfpB6I9rC38cVhpmiQMw14Wm1Ynd0P1BakvgzCCLUXrXMNsuOlitVIeWlAwia07HB8DYHdf0IdCNUwBMJ5SR5n+9Ds30QzpfOsDhTS5VnaLy7ie9mwvvgL5MSx9AXry/vnY6gHIOZhuNstIOFXo6hfjnPqiXYvo2EMCNgqZ8Pvp2hgEQKKfwn0UkqnOJ5wtzkJVTXOYG9AEkTm6TbGF4uqiR2DjeMZdaqBep9ZfutKTr8j+6MdJTqww0ROA3QOOKTj9xe9B5DvfCXEptO8GvTXawU 6pm5CVGW fNu4Rj90JSd0Ecue/hio3ADAY4B2VZR8tfEum24/nQeniNVl4lXJibbsMqs94ESw/cqmpe7D/m1bNRIFg8gtvbDxfZlxrx59IYdy00Z9T+mA8YWviCxpKAhIUINZs10EYjze+E7Hb8M/vLKOFZHVdovQRtLeH31YEtHhiFrcy1nXZBhfxE7UcpdEdjNJQUHbZ9//rvFTTL7zOckcPZwmp5b5hsmlx0RPEW7OkAoUKTZiQrPXy3mOaFLwTYROQjK9e+Pe8NM9B0eTaPCCNFHBZWeiVjCGge4cagkGFxBFH7Hf9GSpYjQzlolvkhHVTjQ4TKWjFDoR7yNoVRXr+6H4YQbFvZKTyRy7ROCase6uppPREs80xEUi3gjmvxNegEdZIMj2kIeUdsdV7J0HEuOr6zaNeUe03rUn9XrzgyB21Q+c4kt9CuQ2R8bQklQlx0FylK1nE4KfMNhqw4bUm9R/YtlGNZqwdZLAkEt2UCRAp9eDIVGG4R3JJMGkGiM9q4H24sqoMD+2QPE4/QURe1od1HSFlm8UDlHGeGqTyUSgv3YU696D7JKSpQYlOP0EWba/wFbcaBilJ8mylyPWzwtzJoDX1z8V/OLM/9O8EoqAp8zCZDbk77xAnW4zeTAoqzZNxrx+b3d0kQkub+/n8h9ux+bWlW1iDxoKA97ioxC+R2LUhxZJt6/t1bPFLGg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: For reliable and timely aging on memcgs, one has to read the page age histograms on time. A kernel thread makes it easier by aging memcgs with valid page_age_intervals and refresh_interval when they can be refreshed, and also reduces the latency of any userspace consumers of the page age histogram. The kernel aging thread is gated behind CONFIG_WORKINGSET_REPORT_AGING. Debugging stats may be added in the future for when aging cannot keep up with the configured refresh_interval. Signed-off-by: Yuanchu Xie --- include/linux/workingset_report.h | 11 ++- mm/Kconfig | 6 ++ mm/Makefile | 1 + mm/memcontrol.c | 11 ++- mm/workingset_report.c | 14 +++- mm/workingset_report_aging.c | 127 ++++++++++++++++++++++++++++++ 6 files changed, 163 insertions(+), 7 deletions(-) create mode 100644 mm/workingset_report_aging.c diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h index e908c5678b1e..759486a3a285 100644 --- a/include/linux/workingset_report.h +++ b/include/linux/workingset_report.h @@ -77,9 +77,18 @@ void wsr_destroy(struct lruvec *lruvec); * The next refresh time is stored in refresh_time. 
*/ bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, - struct pglist_data *pgdat); + struct pglist_data *pgdat, unsigned long *refresh_time); void wsr_register_node(struct node *node); void wsr_unregister_node(struct node *node); + +#ifdef CONFIG_WORKINGSET_REPORT_AGING +void wsr_wakeup_aging_thread(void); +#else /* CONFIG_WORKINGSET_REPORT_AGING */ +static inline void wsr_wakeup_aging_thread(void) +{ +} +#endif /* CONFIG_WORKINGSET_REPORT_AGING */ + #else static inline void wsr_init(struct lruvec *lruvec) { diff --git a/mm/Kconfig b/mm/Kconfig index 212f203b10b9..1e6aa1bd63f2 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1270,6 +1270,12 @@ config WORKINGSET_REPORT This option exports stats and events giving the user more insight into its memory working set. +config WORKINGSET_REPORT_AGING + bool "Workingset report kernel aging thread" + depends on WORKINGSET_REPORT + help + Performs aging on memcgs with their configured refresh intervals. + source "mm/damon/Kconfig" endmenu diff --git a/mm/Makefile b/mm/Makefile index 57093657030d..7caae7f2d6cf 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -93,6 +93,7 @@ obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o obj-$(CONFIG_PAGE_COUNTER) += page_counter.o obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o obj-$(CONFIG_WORKINGSET_REPORT) += workingset_report.o +obj-$(CONFIG_WORKINGSET_REPORT_AGING) += workingset_report_aging.o ifdef CONFIG_SWAP obj-$(CONFIG_MEMCG) += swap_cgroup.o endif diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 2a39a4445bb7..86e15b9fc8e2 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -7102,6 +7102,8 @@ static ssize_t memory_ws_page_age_intervals_write(struct kernfs_open_file *of, old = xchg(&wsr->page_age, page_age); mutex_unlock(&wsr->page_age_lock); kfree(old); + if (err && READ_ONCE(wsr->refresh_interval)) + wsr_wakeup_aging_thread(); return nbytes; failed: kfree(page_age); @@ -7227,14 +7229,17 @@ static ssize_t memory_ws_refresh_interval_write(struct kernfs_open_file *of, { unsigned int nid, msecs; struct wsr_state *wsr; + unsigned long old_interval; struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); ssize_t ret = memory_wsr_threshold_parse(buf, nbytes, &nid, &msecs); if (ret < 0) return ret; - wsr = &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + old_interval = jiffies_to_msecs(READ_ONCE(wsr->refresh_interval)); WRITE_ONCE(wsr->refresh_interval, msecs_to_jiffies(msecs)); + if (msecs && (!old_interval || jiffies_to_msecs(old_interval) > msecs)) + wsr_wakeup_aging_thread(); return ret; } @@ -7285,7 +7290,7 @@ static int memory_ws_page_age_show(struct seq_file *m, void *v) if (!READ_ONCE(wsr->page_age)) continue; - wsr_refresh_report(wsr, memcg, NODE_DATA(nid)); + wsr_refresh_report(wsr, memcg, NODE_DATA(nid), NULL); mutex_lock(&wsr->page_age_lock); if (!wsr->page_age) goto unlock; @@ -7325,7 +7330,7 @@ static int memory_ws_reaccess_histogram_show(struct seq_file *m, void *v) if (!reaccess) goto unlock; - wsr_refresh_report(wsr, memcg, NODE_DATA(nid)); + wsr_refresh_report(wsr, memcg, NODE_DATA(nid), NULL); seq_printf(m, "N%d\n", nid); for (bin = reaccess->bins.bins; diff --git a/mm/workingset_report.c b/mm/workingset_report.c index 504d840bbe6a..da658967eac2 100644 --- a/mm/workingset_report.c +++ b/mm/workingset_report.c @@ -234,7 +234,7 @@ static void refresh_aggregate(struct wsr_page_age_histo *page_age, } bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, - struct pglist_data *pgdat) + struct pglist_data *pgdat, unsigned long *refresh_time) 
{ struct wsr_page_age_histo *page_age = NULL; unsigned long refresh_interval = READ_ONCE(wsr->refresh_interval); @@ -253,6 +253,8 @@ bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, goto unlock; refresh_scan(wsr, root, pgdat, refresh_interval); refresh_aggregate(page_age, root, pgdat); + if (refresh_time) + *refresh_time = page_age->timestamp + refresh_interval; unlock: mutex_unlock(&wsr->page_age_lock); @@ -564,12 +566,16 @@ static ssize_t refresh_interval_store(struct kobject *kobj, unsigned int interval; int err; struct wsr_state *wsr = kobj_to_wsr(kobj); + unsigned long old_interval; err = kstrtouint(buf, 0, &interval); if (err) return err; - WRITE_ONCE(wsr->refresh_interval, msecs_to_jiffies(interval)); + old_interval = xchg(&wsr->refresh_interval, msecs_to_jiffies(interval)); + if (interval && + (!old_interval || jiffies_to_msecs(old_interval) > interval)) + wsr_wakeup_aging_thread(); return len; } @@ -642,6 +648,8 @@ static ssize_t page_age_intervals_store(struct kobject *kobj, mutex_unlock(&wsr->page_age_lock); kfree(old); kfree(buf); + if (err && READ_ONCE(wsr->refresh_interval)) + wsr_wakeup_aging_thread(); return len; failed: kfree(page_age); @@ -663,7 +671,7 @@ static ssize_t page_age_show(struct kobject *kobj, struct kobj_attribute *attr, if (!READ_ONCE(wsr->page_age)) return -EINVAL; - wsr_refresh_report(wsr, NULL, kobj_to_pgdat(kobj)); + wsr_refresh_report(wsr, NULL, kobj_to_pgdat(kobj), NULL); mutex_lock(&wsr->page_age_lock); if (!wsr->page_age) { diff --git a/mm/workingset_report_aging.c b/mm/workingset_report_aging.c new file mode 100644 index 000000000000..91ad5020778a --- /dev/null +++ b/mm/workingset_report_aging.c @@ -0,0 +1,127 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Workingset report kernel aging thread + * + * Performs aging on behalf of memcgs with their configured refresh interval. + * While a userspace program can periodically read the page age breakdown + * per-memcg and trigger aging, the kernel performing aging is less overhead, + * more consistent, and more reliable for the use case where every memcg should + * be aged according to their refresh interval. 
+ */ +#define pr_fmt(fmt) "workingset report aging: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static DECLARE_WAIT_QUEUE_HEAD(aging_wait); +static bool refresh_pending; + +static bool do_aging_node(int nid, unsigned long *next_wake_time) +{ + struct mem_cgroup *memcg; + bool should_wait = true; + struct pglist_data *pgdat = NODE_DATA(nid); + + memcg = mem_cgroup_iter(NULL, NULL, NULL); + do { + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); + struct wsr_state *wsr = &lruvec->wsr; + unsigned long refresh_time; + + /* use returned time to decide when to wake up next */ + if (wsr_refresh_report(wsr, memcg, pgdat, &refresh_time)) { + if (should_wait) { + should_wait = false; + *next_wake_time = refresh_time; + } else if (time_before(refresh_time, *next_wake_time)) { + *next_wake_time = refresh_time; + } + } + + cond_resched(); + } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL))); + + return should_wait; +} + +static int do_aging(void *unused) +{ + while (!kthread_should_stop()) { + int nid; + long timeout_ticks; + unsigned long next_wake_time; + bool should_wait = true; + + WRITE_ONCE(refresh_pending, false); + for_each_node_state(nid, N_MEMORY) { + unsigned long node_next_wake_time; + + if (do_aging_node(nid, &node_next_wake_time)) + continue; + if (should_wait) { + should_wait = false; + next_wake_time = node_next_wake_time; + } else if (time_before(node_next_wake_time, + next_wake_time)) { + next_wake_time = node_next_wake_time; + } + } + + if (should_wait) { + wait_event_interruptible(aging_wait, refresh_pending); + continue; + } + + /* sleep until next aging */ + timeout_ticks = next_wake_time - jiffies; + if (timeout_ticks > 0 && + timeout_ticks != MAX_SCHEDULE_TIMEOUT) { + schedule_timeout_idle(timeout_ticks); + continue; + } + } + return 0; +} + +/* Invoked when refresh_interval shortens or changes to a non-zero value. 
+ */
+void wsr_wakeup_aging_thread(void)
+{
+        WRITE_ONCE(refresh_pending, true);
+        wake_up_interruptible(&aging_wait);
+}
+
+static struct task_struct *aging_thread;
+
+static int aging_init(void)
+{
+        struct task_struct *task;
+
+        task = kthread_run(do_aging, NULL, "kagingd");
+
+        if (IS_ERR(task)) {
+                pr_err("Failed to create aging kthread\n");
+                return PTR_ERR(task);
+        }
+
+        aging_thread = task;
+        pr_info("module loaded\n");
+        return 0;
+}
+
+static void aging_exit(void)
+{
+        kthread_stop(aging_thread);
+        aging_thread = NULL;
+        pr_info("module unloaded\n");
+}
+
+module_init(aging_init);
+module_exit(aging_exit);
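Without CONFIG_WORKINGSET_REPORT_AGING, a consumer has to drive aging itself by
re-reading the per-node page_age file on its own timer (reading page_age
refreshes the report when it is stale). A minimal sketch of such a poller is
below; it is illustrative only, not part of this series, and the sysfs path
assumes the per-node layout exercised by the selftest in the next patch.

/*
 * Illustrative userspace poller; the kernel aging thread removes the need
 * for every consumer to run a loop like this. Not part of the series.
 */
#include <stdio.h>
#include <unistd.h>

#define PAGE_AGE "/sys/devices/system/node/node0/workingset_report/page_age"

int main(void)
{
        char buf[4096];

        for (;;) {
                FILE *f = fopen(PAGE_AGE, "r");
                size_t len;

                if (!f) {
                        perror("fopen");
                        return 1;
                }
                /* Reading page_age triggers a refresh when the report is stale. */
                len = fread(buf, 1, sizeof(buf) - 1, f);
                buf[len] = '\0';
                fclose(f);
                fputs(buf, stdout);
                /* Poll no slower than the configured refresh_interval (5s here). */
                sleep(5);
        }
}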
From patchwork Wed Mar 27 21:31:07 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13607493
Date: Wed, 27 Mar 2024 14:31:07 -0700
In-Reply-To: <20240327213108.2384666-1-yuanchu@google.com>
References: <20240327213108.2384666-1-yuanchu@google.com>
Message-ID: <20240327213108.2384666-9-yuanchu@google.com>
Subject: [RFC PATCH v3 8/8] mm: test system-wide workingset reporting
From: Yuanchu Xie
Tsirkin" , Vasily Averin , Nhat Pham , Miaohe Lin , Qi Zheng , Abel Wu , "Vishal Moola (Oracle)" , Kefeng Wang , Yuanchu Xie , linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 56328180012 X-Stat-Signature: bdsu63ji5i3a566bxahw1azon56iwkyx X-HE-Tag: 1711575104-354444 X-HE-Meta: U2FsdGVkX18+FNqK2xXZ1FyPW9Mn7uhtwFfSH2xzhDhnF1qag8sBuo7EeGb0Nr0L7o7ortB+gHrhi7PfVyvCF1fQyQVx5t4YsYLBBDiv3S5+fpxVeYURYKaf8Q4O17LPEbZk5GrRgBhlackmFPzUp92BjgbJS7gMSK5GjvKcIdrB2nWHPwXukzwQ3GcrDnueBqACUOxifZa4WMeag8CY5UpbbuysPc/1FAX1J9Kp/FeECgaSvsFzIsjbeiY7/R+eDkaM3VrfFNjxS/L/dxOjE62jLz++vsi8PwbZ0FHZB+e4jH3Cqncn01xBolyWNpn/F/WhkRJUKj3fiqRaLvF4CoKwUHB1+KzOddqOFMaR3DIcozRYgJUFy7H+M7sIlUrLhzP3y9FQ6L7uWs2TLs4vQIMnExRQ/IuduF2oKbEgwJi8OtveAKlk+8E26BAP6IFqtkrkRttLlkUWIdJgJkxIcjTVUaA7IylSgAoNeLlX0HfhGJabzgKb78q6Yb5TdixfeWQkbm/FmXbsr6A4H3haBf8E6O0x+EJE2Z7Kh7Edy6Z82AsmRs42dPbMjSwMCUATEgcI4YuLWD4m+83RQSnbUQJDmLJvTWFqzN5tyaRW2ami+HPb29wGCydMMfjxK8SKHEy3qkxSUvUyTMDGuGG+c4sUoy92fYX+wkD/Ceysy4GN6oNT6WUh9mHAUxC5hhmoKvZYjxmWHHD3OOSC8KkPqXNxxXrRAyilr6lC27mi5FMIXOj3FuwFOUk2gnfas4kzRx4Fe8nFpLmwA8hFz5MhV4/RtCiyWStADbY9Gu4fZofwMbTslfxB8Y4y+Pzzbky9pOFqNRh4sQZdBsTgrPClw+0lcNnRG7BDepOOv9cFUERud8w432aYwERF1Yk5b8YWpnkyXLgDkdb6u/HNTYLk9AR4v/6Pu/LHAoOKoCHN7iSBEGxIlWw6bfIv0akL1Z0SApV2UDsdU0D59huwdyC kI4vHdNt 12ezqOU0H5bMbWGWY2qmOsabqsuN6NNIoAx3W8uFAT2JIqkWQx9g2sBjq2pLXj0aQ3OFMHn1K2bBd6JbNMXzSdn1rM90bcWKMLL6oEBPUbSsqLV1/snRzcTlCRpczQqD7ROLyKkkIuCGmVT4vt+Yl7ACUASJ/mxC8DTJsEuiokxNHOn/TwGdWJV9yFghmznvvt5XP5Sk53TsnhmG7hlotYxiwa3uIfLowJGsEgmM5b6zaKqWm4g1PvPAZjYx26icnKlyeZ0x0pMkMe68RX58sHaRnTC5wgCAjoHb4bGlq+HuZO0zFhdMXB+gkz0t9SHXYQuM6SRTjDcaYbs7SSEFt9ZhbLUsc4evgmcV2iwefDdh8P+gtkmp8IqVOh2P85B/wfrh/uM8vJpYuvQ3Qq4BHsrXVpp8T8UXjKDeBjrwCqdfX+Vc= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: A basic test that verifies the working set size of a simple memory accessor. It should work with or without the aging thread. Question: I don't know how to best test file memory in selftests. Is there a place where I should put the temporary file? /tmp can be tmpfs mounted in many distros. 
Signed-off-by: Yuanchu Xie --- tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 3 + .../testing/selftests/mm/workingset_report.c | 315 +++++++++++++++++ .../testing/selftests/mm/workingset_report.h | 37 ++ .../selftests/mm/workingset_report_test.c | 328 ++++++++++++++++++ 5 files changed, 684 insertions(+) create mode 100644 tools/testing/selftests/mm/workingset_report.c create mode 100644 tools/testing/selftests/mm/workingset_report.h create mode 100644 tools/testing/selftests/mm/workingset_report_test.c diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore index 4ff10ea61461..14a2412c8257 100644 --- a/tools/testing/selftests/mm/.gitignore +++ b/tools/testing/selftests/mm/.gitignore @@ -46,3 +46,4 @@ gup_longterm mkdirty va_high_addr_switch hugetlb_fault_after_madv +workingset_report_test diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index 2453add65d12..c0869bf07e99 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -70,6 +70,7 @@ TEST_GEN_FILES += ksm_tests TEST_GEN_FILES += ksm_functional_tests TEST_GEN_FILES += mdwe_test TEST_GEN_FILES += hugetlb_fault_after_madv +TEST_GEN_FILES += workingset_report_test ifneq ($(ARCH),arm64) TEST_GEN_FILES += soft-dirty @@ -123,6 +124,8 @@ $(TEST_GEN_FILES): vm_util.c thp_settings.c $(OUTPUT)/uffd-stress: uffd-common.c $(OUTPUT)/uffd-unit-tests: uffd-common.c +$(OUTPUT)/workingset_report_test: workingset_report.c + ifeq ($(ARCH),x86_64) BINARIES_32 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_32)) BINARIES_64 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_64)) diff --git a/tools/testing/selftests/mm/workingset_report.c b/tools/testing/selftests/mm/workingset_report.c new file mode 100644 index 000000000000..93387f0f30ee --- /dev/null +++ b/tools/testing/selftests/mm/workingset_report.c @@ -0,0 +1,315 @@ +// SPDX-License-Identifier: GPL-2.0 +#include "workingset_report.h" + +#include +#include +#include +#include +#include +#include +#include +#include + +#define SYSFS_NODE_ONLINE "/sys/devices/system/node/online" +#define PROC_DROP_CACHES "/proc/sys/vm/drop_caches" + +/* Returns read len on success, or -errno on failure. */ +static ssize_t read_text(const char *path, char *buf, size_t max_len) +{ + ssize_t len; + int fd, err; + size_t bytes_read = 0; + + if (!max_len) + return -EINVAL; + + fd = open(path, O_RDONLY); + if (fd < 0) + return -errno; + + while (bytes_read < max_len - 1) { + len = read(fd, buf + bytes_read, max_len - 1 - bytes_read); + + if (len <= 0) + break; + bytes_read += len; + } + + buf[bytes_read] = '\0'; + + err = -errno; + close(fd); + return len < 0 ? err : bytes_read; +} + +/* Returns written len on success, or -errno on failure. */ +static ssize_t write_text(const char *path, const char *buf, ssize_t max_len) +{ + int fd, len, err; + size_t bytes_written = 0; + + fd = open(path, O_WRONLY | O_APPEND); + if (fd < 0) + return -errno; + + while (bytes_written < max_len) { + len = write(fd, buf + bytes_written, max_len - bytes_written); + + if (len < 0) + break; + bytes_written += len; + } + + err = -errno; + close(fd); + return len < 0 ? 
err : bytes_written; +} + +static long read_num(const char *path) +{ + char buf[21]; + + if (read_text(path, buf, sizeof(buf)) <= 0) + return -1; + return (long)strtoul(buf, NULL, 10); +} + +static int write_num(const char *path, unsigned long n) +{ + char buf[21]; + + sprintf(buf, "%lu", n); + if (write_text(path, buf, strlen(buf)) < 0) + return -1; + return 0; +} + +long sysfs_get_refresh_interval(int nid) +{ + char file[128]; + + snprintf( + file, + sizeof(file), + "/sys/devices/system/node/node%d/workingset_report/refresh_interval", + nid); + return read_num(file); +} + +int sysfs_set_refresh_interval(int nid, long interval) +{ + char file[128]; + + snprintf( + file, + sizeof(file), + "/sys/devices/system/node/node%d/workingset_report/refresh_interval", + nid); + return write_num(file, interval); +} + +int sysfs_get_page_age_intervals_str(int nid, char *buf, int len) +{ + char path[128]; + + snprintf( + path, + sizeof(path), + "/sys/devices/system/node/node%d/workingset_report/page_age_intervals", + nid); + return read_text(path, buf, len); + +} + +int sysfs_set_page_age_intervals_str(int nid, const char *buf, int len) +{ + char path[128]; + + snprintf( + path, + sizeof(path), + "/sys/devices/system/node/node%d/workingset_report/page_age_intervals", + nid); + return write_text(path, buf, len); +} + +int sysfs_set_page_age_intervals(int nid, const char *intervals[], + int nr_intervals) +{ + char file[128]; + char buf[1024]; + int i; + int err, len = 0; + + for (i = 0; i < nr_intervals; ++i) { + err = snprintf(buf + len, sizeof(buf) - len, "%s", intervals[i]); + + if (err < 0) + return err; + len += err; + + if (i < nr_intervals - 1) { + err = snprintf(buf + len, sizeof(buf) - len, ","); + if (err < 0) + return err; + len += err; + } + } + + snprintf( + file, + sizeof(file), + "/sys/devices/system/node/node%d/workingset_report/page_age_intervals", + nid); + return write_text(file, buf, len); +} + +int get_nr_nodes(void) +{ + char buf[22]; + char *found; + + if (read_text(SYSFS_NODE_ONLINE, buf, sizeof(buf)) <= 0) + return -1; + found = strstr(buf, "-"); + if (found) + return (int)strtoul(found + 1, NULL, 10) + 1; + return (long)strtoul(buf, NULL, 10) + 1; +} + +int drop_pagecache(void) +{ + return write_num(PROC_DROP_CACHES, 1); +} + +ssize_t sysfs_page_age_read(int nid, char *buf, size_t len) + +{ + char file[128]; + + snprintf(file, + sizeof(file), + "/sys/devices/system/node/node%d/workingset_report/page_age", + nid); + return read_text(file, buf, len); +} + +/* + * Finds the first occurrence of "N\n" + * Modifies buf to terminate before the next occurrence of "N". 
+ * Returns a substring of buf starting after "N\n" + */ +char *page_age_split_node(char *buf, int nid, char **next) +{ + char node_str[5]; + char *found; + int node_str_len; + + node_str_len = snprintf(node_str, sizeof(node_str), "N%u\n", nid); + + /* find the node prefix first */ + found = strstr(buf, node_str); + if (!found) { + fprintf(stderr, "cannot find '%s' in page_idle_age", node_str); + return NULL; + } + found += node_str_len; + + *next = strchr(found, 'N'); + if (*next) + *(*next - 1) = '\0'; + + return found; +} + +ssize_t page_age_read(const char *buf, const char *interval, int pagetype) +{ + static const char * const type[ANON_AND_FILE] = { "anon=", "file=" }; + char *found; + + found = strstr(buf, interval); + if (!found) { + fprintf(stderr, "cannot find %s in page_age", interval); + return -1; + } + found = strstr(found, type[pagetype]); + if (!found) { + fprintf(stderr, "cannot find %s in page_age", type[pagetype]); + return -1; + } + found += strlen(type[pagetype]); + return (long)strtoul(found, NULL, 10); +} + +static const char *TEMP_FILE = "/tmp/workingset_selftest"; +void cleanup_file_workingset(void) +{ + remove(TEMP_FILE); +} + +int alloc_file_workingset(void *arg) +{ + int err = 0; + char *ptr; + int fd; + int ppid; + char *mapped; + size_t size = (size_t)arg; + size_t page_size = getpagesize(); + + ppid = getppid(); + + fd = open(TEMP_FILE, O_RDWR | O_CREAT); + if (fd < 0) { + err = -errno; + perror("failed to open temp file\n"); + goto cleanup; + } + + if (fallocate(fd, 0, 0, size) < 0) { + err = -errno; + perror("fallocate"); + goto cleanup; + } + + mapped = (char *)mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, + fd, 0); + if (mapped == NULL) { + err = -errno; + perror("mmap"); + goto cleanup; + } + + while (getppid() == ppid) { + sync(); + for (ptr = mapped; ptr < mapped + size; ptr += page_size) + *ptr = *ptr ^ 0xFF; + } + +cleanup: + cleanup_file_workingset(); + return err; +} + +int alloc_anon_workingset(void *arg) +{ + char *buf, *ptr; + int ppid = getppid(); + size_t size = (size_t)arg; + size_t page_size = getpagesize(); + + buf = malloc(size); + + if (!buf) { + fprintf(stderr, "cannot allocate anon workingset"); + exit(1); + } + + while (getppid() == ppid) { + for (ptr = buf; ptr < buf + size; ptr += page_size) + *ptr = *ptr ^ 0xFF; + } + + free(buf); + return 0; +} diff --git a/tools/testing/selftests/mm/workingset_report.h b/tools/testing/selftests/mm/workingset_report.h new file mode 100644 index 000000000000..f72a931298e0 --- /dev/null +++ b/tools/testing/selftests/mm/workingset_report.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef WORKINGSET_REPORT_H_ +#define WORKINGSET_REPORT_H_ + +#define _GNU_SOURCE + +#include +#include +#include +#include +#include + +#define PAGETYPE_ANON 0 +#define PAGETYPE_FILE 1 +#define ANON_AND_FILE 2 + +int get_nr_nodes(void); +int drop_pagecache(void); + +long sysfs_get_refresh_interval(int nid); +int sysfs_set_refresh_interval(int nid, long interval); + +int sysfs_get_page_age_intervals_str(int nid, char *buf, int len); +int sysfs_set_page_age_intervals_str(int nid, const char *buf, int len); + +int sysfs_set_page_age_intervals(int nid, const char *intervals[], + int nr_intervals); + +char *page_age_split_node(char *buf, int nid, char **next); +ssize_t sysfs_page_age_read(int nid, char *buf, size_t len); +ssize_t page_age_read(const char *buf, const char *interval, int pagetype); + +int alloc_file_workingset(void *arg); +void cleanup_file_workingset(void); +int alloc_anon_workingset(void 
*arg); + +#endif /* WORKINGSET_REPORT_H_ */ diff --git a/tools/testing/selftests/mm/workingset_report_test.c b/tools/testing/selftests/mm/workingset_report_test.c new file mode 100644 index 000000000000..e6e857d8fe35 --- /dev/null +++ b/tools/testing/selftests/mm/workingset_report_test.c @@ -0,0 +1,328 @@ +// SPDX-License-Identifier: GPL-2.0 +#include "workingset_report.h" + +#include +#include +#include +#include + +#include "../clone3/clone3_selftests.h" + +#define REFRESH_INTERVAL 5000 +#define MB(x) (x << 20) + +static void sleep_ms(int milliseconds) +{ + struct timespec ts; + + ts.tv_sec = milliseconds / 1000; + ts.tv_nsec = (milliseconds % 1000) * 1000000; + nanosleep(&ts, NULL); +} + +/* + * Checks if two given values differ by less than err% of their sum. + */ +static inline int values_close(long a, long b, int err) +{ + return abs(a - b) <= (a + b) / 100 * err; +} + +static const char * const PAGE_AGE_INTERVALS[] = { + "6000", "10000", "15000", "18446744073709551615", +}; +#define NR_PAGE_AGE_INTERVALS (ARRAY_SIZE(PAGE_AGE_INTERVALS)) +/* add one for the catch all last interval */ + +static int set_page_age_intervals_all_nodes(const char *intervals, int nr_nodes) +{ + int i; + + for (i = 0; i < nr_nodes; ++i) { + int err = sysfs_set_page_age_intervals_str( + i, &intervals[i * 1024], strlen(&intervals[i * 1024])); + + if (err < 0) + return err; + } + return 0; +} + +static int get_page_age_intervals_all_nodes(char *intervals, int nr_nodes) +{ + int i; + + for (i = 0; i < nr_nodes; ++i) { + int err = sysfs_get_page_age_intervals_str( + i, &intervals[i * 1024], 1024); + + if (err < 0) + return err; + } + return 0; +} + +static int set_refresh_interval_all_nodes(const long *interval, int nr_nodes) +{ + int i; + + for (i = 0; i < nr_nodes; ++i) { + int err = sysfs_set_refresh_interval(i, interval[i]); + + if (err < 0) + return err; + } + return 0; +} + +static int get_refresh_interval_all_nodes(long *interval, int nr_nodes) +{ + int i; + + for (i = 0; i < nr_nodes; ++i) { + long val = sysfs_get_refresh_interval(i); + + if (val < 0) + return val; + interval[i] = val; + } + return 0; +} + +static pid_t clone_and_run(int fn(void *arg), void *arg) +{ + pid_t pid; + + struct __clone_args args = { + .exit_signal = SIGCHLD, + }; + + pid = sys_clone3(&args, sizeof(struct __clone_args)); + + if (pid == 0) + exit(fn(arg)); + + return pid; +} + +static int read_workingset(int pagetype, int nid, + unsigned long page_age[NR_PAGE_AGE_INTERVALS]) +{ + int i, err; + char buf[4096]; + + err = sysfs_page_age_read(nid, buf, sizeof(buf)); + if (err < 0) + return err; + + for (i = 0; i < NR_PAGE_AGE_INTERVALS; ++i) { + err = page_age_read(buf, PAGE_AGE_INTERVALS[i], pagetype); + if (err < 0) + return err; + page_age[i] = err; + } + + return 0; +} + +static ssize_t read_interval_all_nodes(int pagetype, int interval) +{ + int i, err; + unsigned long page_age[NR_PAGE_AGE_INTERVALS]; + ssize_t ret = 0; + int nr_nodes = get_nr_nodes(); + + for (i = 0; i < nr_nodes; ++i) { + err = read_workingset(pagetype, i, page_age); + if (err < 0) + return err; + + ret += page_age[interval]; + } + + return ret; +} + +#define TEST_SIZE MB(500l) + +static int run_test(int f(void)) +{ + int i, err, test_result; + long *old_refresh_intervals; + long *new_refresh_intervals; + char *old_page_age_intervals; + int nr_nodes = get_nr_nodes(); + + if (nr_nodes <= 0) { + fprintf(stderr, "failed to get nr_nodes\n"); + return KSFT_FAIL; + } + + old_refresh_intervals = calloc(nr_nodes, sizeof(long)); + new_refresh_intervals = 
calloc(nr_nodes, sizeof(long)); + old_page_age_intervals = calloc(nr_nodes, 1024); + + if (!(old_refresh_intervals && new_refresh_intervals && + old_page_age_intervals)) { + fprintf(stderr, "failed to allocate memory for intervals\n"); + return KSFT_FAIL; + } + + err = get_refresh_interval_all_nodes(old_refresh_intervals, nr_nodes); + if (err < 0) { + fprintf(stderr, "failed to read refresh interval\n"); + return KSFT_FAIL; + } + + err = get_page_age_intervals_all_nodes(old_page_age_intervals, nr_nodes); + if (err < 0) { + fprintf(stderr, "failed to read page age interval\n"); + return KSFT_FAIL; + } + + for (i = 0; i < nr_nodes; ++i) + new_refresh_intervals[i] = REFRESH_INTERVAL; + err = set_refresh_interval_all_nodes(new_refresh_intervals, nr_nodes); + if (err < 0) { + fprintf(stderr, "failed to set refresh interval\n"); + test_result = KSFT_FAIL; + goto fail; + } + + for (i = 0; i < nr_nodes; ++i) { + err = sysfs_set_page_age_intervals(i, PAGE_AGE_INTERVALS, + NR_PAGE_AGE_INTERVALS - 1); + if (err < 0) { + fprintf(stderr, "failed to set page age interval\n"); + test_result = KSFT_FAIL; + goto fail; + } + } + + sync(); + drop_pagecache(); + + test_result = f(); + +fail: + err = set_refresh_interval_all_nodes(old_refresh_intervals, nr_nodes); + if (err < 0) { + fprintf(stderr, "failed to restore refresh interval\n"); + test_result = KSFT_FAIL; + } + err = set_page_age_intervals_all_nodes(old_page_age_intervals, nr_nodes); + if (err < 0) { + fprintf(stderr, "failed to restore page age interval\n"); + test_result = KSFT_FAIL; + } + return test_result; +} + +static int test_file(void) +{ + ssize_t ws_size_ref, ws_size_test; + int ret = KSFT_FAIL, i; + pid_t pid = 0; + + ws_size_ref = read_interval_all_nodes(PAGETYPE_FILE, 0); + if (ws_size_ref < 0) + goto cleanup; + + pid = clone_and_run(alloc_file_workingset, (void *)TEST_SIZE); + if (pid < 0) + goto cleanup; + + read_interval_all_nodes(PAGETYPE_FILE, 0); + sleep_ms(REFRESH_INTERVAL); + + for (i = 0; i < 3; ++i) { + sleep_ms(REFRESH_INTERVAL); + ws_size_test = read_interval_all_nodes(PAGETYPE_FILE, 0); + + if (!values_close(ws_size_test - ws_size_ref, TEST_SIZE, 10)) { + fprintf(stderr, + "file working set size difference too large: actual=%ld, expected=%ld\n", + ws_size_test - ws_size_ref, TEST_SIZE); + goto cleanup; + } + } + ret = KSFT_PASS; + +cleanup: + if (pid > 0) + kill(pid, SIGKILL); + cleanup_file_workingset(); + return ret; +} + +static int test_anon(void) +{ + ssize_t ws_size_ref, ws_size_test; + pid_t pid = 0; + int ret = KSFT_FAIL, i; + + ws_size_ref = read_interval_all_nodes(PAGETYPE_ANON, 0); + if (ws_size_ref < 0) + goto cleanup; + + pid = clone_and_run(alloc_anon_workingset, (void *)TEST_SIZE); + if (pid < 0) + goto cleanup; + + sleep_ms(REFRESH_INTERVAL); + read_interval_all_nodes(PAGETYPE_ANON, 0); + + for (i = 0; i < 5; ++i) { + sleep_ms(REFRESH_INTERVAL); + ws_size_test = read_interval_all_nodes(PAGETYPE_ANON, 0); + if (ws_size_test < 0) + goto cleanup; + + if (!values_close(ws_size_test - ws_size_ref, TEST_SIZE, 10)) { + fprintf(stderr, + "anon working set size difference too large: actual=%ld, expected=%ld\n", + ws_size_test - ws_size_ref, TEST_SIZE); + /* goto cleanup; */ + } + } + ret = KSFT_PASS; + +cleanup: + if (pid > 0) + kill(pid, SIGKILL); + return ret; +} + + +#define T(x) { x, #x } +struct workingset_test { + int (*fn)(void); + const char *name; +} tests[] = { + T(test_anon), + T(test_file), +}; +#undef T + +int main(int argc, char **argv) +{ + int ret = EXIT_SUCCESS, i, err; + + for (i = 0; i < 
ARRAY_SIZE(tests); i++) { + err = run_test(tests[i].fn); + switch (err) { + case KSFT_PASS: + ksft_test_result_pass("%s\n", tests[i].name); + break; + case KSFT_SKIP: + ksft_test_result_skip("%s\n", tests[i].name); + break; + default: + ret = EXIT_FAILURE; + ksft_test_result_fail("%s with error %d\n", + tests[i].name, err); + break; + } + } + return ret; +}
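To make the pass criteria concrete: read_interval_all_nodes() sums the
requested page age bin across all online nodes, and test_anon()/test_file()
expect the delta between the loaded and the baseline reading of the youngest
bin to land close to TEST_SIZE (500 MB, within the 10% tolerance of
values_close()). The fragment below shows the general shape of the per-node
page_age text that page_age_split_node() and page_age_read() assume; the
numbers are made up and the exact formatting is defined by the sysfs interface
added earlier in the series.

N0
6000 anon=524288000 file=0
10000 anon=0 file=0
15000 anon=0 file=0
18446744073709551615 anon=0 file=0

The test binary itself is built and run together with the rest of the mm
selftests through the Makefile change above.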