From patchwork Wed Nov 27 08:21:58 2024
X-Patchwork-Submitter: Gregory Price <gourry@gourry.net>
X-Patchwork-Id: 13887192
From: Gregory Price <gourry@gourry.net>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, nehagholkar@meta.com, abhishekd@meta.com,
	kernel-team@meta.com, david@redhat.com, ying.huang@intel.com,
	nphamcs@gmail.com, gourry@gourry.net, akpm@linux-foundation.org,
	hannes@cmpxchg.org, feng.tang@intel.com, kbusch@meta.com
Subject: [PATCH 1/4] migrate: Allow migrate_misplaced_folio APIs without a VMA
Date: Wed, 27 Nov 2024 03:21:58 -0500
Message-ID: <20241127082201.1276-2-gourry@gourry.net>
In-Reply-To: <20241127082201.1276-1-gourry@gourry.net>
References: <20241127082201.1276-1-gourry@gourry.net>

To migrate unmapped pagecache folios, migrate_misplaced_folio and
migrate_misplaced_folio_prepare must handle folios without VMAs.

migrate_misplaced_folio_prepare checks the VMA for exec bits, so allow a
NULL VMA for folios that do not have such a mapping.

migrate_misplaced_folio must call migrate_pages with MIGRATE_SYNC when on
the pagecache path, because that path is a synchronous context.

Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 mm/migrate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index dfb5eba3c522..3b0bd3f21ac3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2632,7 +2632,7 @@ int migrate_misplaced_folio_prepare(struct folio *folio,
 		 * See folio_likely_mapped_shared() on possible imprecision
 		 * when we cannot easily detect if a folio is shared.
 		 */
-		if ((vma->vm_flags & VM_EXEC) &&
+		if (vma && (vma->vm_flags & VM_EXEC) &&
 		    folio_likely_mapped_shared(folio))
			return -EACCES;
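
With the exec check made NULL-safe, an unmapped pagecache folio can be
isolated and migrated without a VMA. A minimal kernel-context sketch of the
resulting call pattern (the helper name is hypothetical; patch 4 of this
series uses the same prepare/migrate pair in promotion_candidate()):

#include <linux/migrate.h>

/* Hypothetical helper: try to move an unmapped pagecache folio to @nid. */
static int try_migrate_pagecache_folio(struct folio *folio, int nid)
{
	int err;

	/* Isolate the folio; fails for exec-mapped shared or already-isolated folios. */
	err = migrate_misplaced_folio_prepare(folio, NULL, nid);
	if (err)
		return err;

	/* migrate_pages() puts the folio back on the LRU if the move fails. */
	return migrate_misplaced_folio(folio, NULL, nid);
}
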
From patchwork Wed Nov 27 08:21:59 2024
X-Patchwork-Submitter: Gregory Price <gourry@gourry.net>
X-Patchwork-Id: 13887193
From: Gregory Price <gourry@gourry.net>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, nehagholkar@meta.com, abhishekd@meta.com,
	kernel-team@meta.com, david@redhat.com, ying.huang@intel.com,
	nphamcs@gmail.com, gourry@gourry.net, akpm@linux-foundation.org,
	hannes@cmpxchg.org, feng.tang@intel.com, kbusch@meta.com
Subject: [PATCH 2/4] memory: allow non-fault migration in numa_migrate_check path
Date: Wed, 27 Nov 2024 03:21:59 -0500
Message-ID: <20241127082201.1276-3-gourry@gourry.net>
In-Reply-To: <20241127082201.1276-1-gourry@gourry.net>
References: <20241127082201.1276-1-gourry@gourry.net>

numa_migrate_check and mpol_misplaced presume callers are in the fault
path with access to a VMA. To enable migrations from the page cache, it
is preferable to re-use the same logic to handle migration prep.

Mildly refactor numa_migrate_check and mpol_misplaced so that they may
be called with (vmf = NULL) from non-faulting paths.

Also move the NUMA balancing hint-fault accounting inside the
appropriate ifdef.

Signed-off-by: Gregory Price <gourry@gourry.net>
---
 mm/memory.c    | 28 ++++++++++++++++------------
 mm/mempolicy.c | 25 +++++++++++++++++--------
 2 files changed, 33 insertions(+), 20 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 209885a4134f..a373b6ad0b34 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5471,7 +5471,20 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 		      unsigned long addr, int *flags,
 		      bool writable, int *last_cpupid)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	if (vmf) {
+		struct vm_area_struct *vma = vmf->vma;
+		const vm_flags_t vmflags = vma->vm_flags;
+
+		/*
+		 * Flag if the folio is shared between multiple address spaces.
+		 * This used later when determining whether to group tasks.
+		 */
+		if (folio_likely_mapped_shared(folio))
+			*flags |= vmflags & VM_SHARED ? TNF_SHARED : 0;
+
+		/* Record the current PID acceesing VMA */
+		vma_set_access_pid_bit(vma);
+	}
 
 	/*
 	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
@@ -5484,12 +5497,6 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 	if (!writable)
 		*flags |= TNF_NO_GROUP;
 
-	/*
-	 * Flag if the folio is shared between multiple address spaces. This
-	 * is later used when determining whether to group tasks together
-	 */
-	if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
-		*flags |= TNF_SHARED;
 	/*
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time. So use default value.
@@ -5499,17 +5506,14 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 	else
 		*last_cpupid = folio_last_cpupid(folio);
 
-	/* Record the current PID acceesing VMA */
-	vma_set_access_pid_bit(vma);
-
-	count_vm_numa_event(NUMA_HINT_FAULTS);
 #ifdef CONFIG_NUMA_BALANCING
+	count_vm_numa_event(NUMA_HINT_FAULTS);
 	count_memcg_folio_events(folio, NUMA_HINT_FAULTS, 1);
-#endif
 	if (folio_nid(folio) == numa_node_id()) {
 		count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
 		*flags |= TNF_FAULT_LOCAL;
 	}
+#endif
 
 	return mpol_misplaced(folio, vmf, addr);
 }
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index bb37cd1a51d8..eb6c97bccea3 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2727,12 +2727,16 @@ static void sp_free(struct sp_node *n)
  * mpol_misplaced - check whether current folio node is valid in policy
  *
  * @folio: folio to be checked
- * @vmf: structure describing the fault
+ * @vmf: structure describing the fault (NULL if called outside fault path)
  * @addr: virtual address in @vma for shared policy lookup and interleave policy
+ *	Ignored if vmf is NULL.
  *
  * Lookup current policy node id for vma,addr and "compare to" folio's
- * node id. Policy determination "mimics" alloc_page_vma().
- * Called from fault path where we know the vma and faulting address.
+ * node id - or task's policy node id if vmf is NULL. Policy determination
+ * "mimics" alloc_page_vma().
+ *
+ * vmf must be non-NULL if called from fault path where we know the vma and
+ * faulting address. The PTL must be held by caller if vmf is not NULL.
  *
  * Return: NUMA_NO_NODE if the page is in a node that is valid for this
  * policy, or a suitable node ID to allocate a replacement folio from.
@@ -2744,7 +2748,6 @@ int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
 	pgoff_t ilx;
 	struct zoneref *z;
 	int curnid = folio_nid(folio);
-	struct vm_area_struct *vma = vmf->vma;
 	int thiscpu = raw_smp_processor_id();
 	int thisnid = numa_node_id();
 	int polnid = NUMA_NO_NODE;
@@ -2754,18 +2757,24 @@ int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
 	 * Make sure ptl is held so that we don't preempt and we
 	 * have a stable smp processor id
 	 */
-	lockdep_assert_held(vmf->ptl);
-	pol = get_vma_policy(vma, addr, folio_order(folio), &ilx);
+	if (vmf) {
+		lockdep_assert_held(vmf->ptl);
+		pol = get_vma_policy(vmf->vma, addr, folio_order(folio), &ilx);
+	} else {
+		pol = get_task_policy(current);
+	}
 
 	if (!(pol->flags & MPOL_F_MOF))
 		goto out;
 
 	switch (pol->mode) {
 	case MPOL_INTERLEAVE:
-		polnid = interleave_nid(pol, ilx);
+		polnid = vmf ? interleave_nid(pol, ilx) :
+			       interleave_nodes(pol);
 		break;
 	case MPOL_WEIGHTED_INTERLEAVE:
-		polnid = weighted_interleave_nid(pol, ilx);
+		polnid = vmf ? weighted_interleave_nid(pol, ilx) :
+			       weighted_interleave_nodes(pol);
 		break;
 	case MPOL_PREFERRED:
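
With vmf == NULL, numa_migrate_check() skips the VMA-based bookkeeping and
mpol_misplaced() falls back to the calling task's policy via
get_task_policy(current). A kernel-context fragment sketching a non-fault
caller (variable names assumed; patch 4 issues the same call from
promotion_candidate()):

	int flags = 0, last_cpupid;
	int target_nid;

	/* No fault context: vmf == NULL, address is unused and passed as 0. */
	target_nid = numa_migrate_check(folio, NULL, 0, &flags, writable,
					&last_cpupid);
	if (target_nid == NUMA_NO_NODE)
		return;	/* folio already sits on an acceptable node */
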
From patchwork Wed Nov 27 08:22:00 2024
X-Patchwork-Submitter: Gregory Price <gourry@gourry.net>
X-Patchwork-Id: 13887194
From: Gregory Price <gourry@gourry.net>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, nehagholkar@meta.com, abhishekd@meta.com,
	kernel-team@meta.com, david@redhat.com, ying.huang@intel.com,
	nphamcs@gmail.com, gourry@gourry.net, akpm@linux-foundation.org,
	hannes@cmpxchg.org, feng.tang@intel.com, kbusch@meta.com
Subject: [PATCH 3/4] vmstat: add page-cache numa hints
Date: Wed, 27 Nov 2024 03:22:00 -0500
Message-ID: <20241127082201.1276-4-gourry@gourry.net>
In-Reply-To: <20241127082201.1276-1-gourry@gourry.net>
References: <20241127082201.1276-1-gourry@gourry.net>

Count non-page-fault events as page-cache numa hints instead of fault
hints in vmstat.

Signed-off-by: Gregory Price <gourry@gourry.net>
---
 include/linux/vm_event_item.h |  2 ++
 mm/memory.c                   | 15 ++++++++++-----
 mm/vmstat.c                   |  2 ++
 3 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index f70d0958095c..9fee15d9ba48 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -63,6 +63,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		NUMA_HUGE_PTE_UPDATES,
 		NUMA_HINT_FAULTS,
 		NUMA_HINT_FAULTS_LOCAL,
+		NUMA_HINT_PAGE_CACHE,
+		NUMA_HINT_PAGE_CACHE_LOCAL,
 		NUMA_PAGE_MIGRATE,
 #endif
 #ifdef CONFIG_MIGRATION
diff --git a/mm/memory.c b/mm/memory.c
index a373b6ad0b34..35b72a1cfbd5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5507,11 +5507,16 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 		*last_cpupid = folio_last_cpupid(folio);
 
 #ifdef CONFIG_NUMA_BALANCING
-	count_vm_numa_event(NUMA_HINT_FAULTS);
-	count_memcg_folio_events(folio, NUMA_HINT_FAULTS, 1);
-	if (folio_nid(folio) == numa_node_id()) {
-		count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
-		*flags |= TNF_FAULT_LOCAL;
+	if (vmf) {
+		count_vm_numa_event(NUMA_HINT_FAULTS);
+		count_memcg_folio_events(folio, NUMA_HINT_FAULTS, 1);
+		if (folio_nid(folio) == numa_node_id()) {
+			count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
+			*flags |= TNF_FAULT_LOCAL;
+		}
+	} else {
+		count_vm_numa_event(NUMA_HINT_PAGE_CACHE);
+		count_memcg_folio_events(folio, NUMA_HINT_PAGE_CACHE, 1);
 	}
 #endif
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4d016314a56c..bcd9be11e957 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1338,6 +1338,8 @@ const char * const vmstat_text[] = {
 	"numa_huge_pte_updates",
 	"numa_hint_faults",
 	"numa_hint_faults_local",
+	"numa_hint_page_cache",
+	"numa_hint_page_cache_local",
 	"numa_pages_migrated",
 #endif
 #ifdef CONFIG_MIGRATION
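
The new events surface in /proc/vmstat next to the existing hint-fault
counters (when CONFIG_NUMA_BALANCING is enabled). A small userspace sketch
for watching them, using the counter names added to vmstat_text above:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	/* matches both numa_hint_page_cache and numa_hint_page_cache_local */
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "numa_hint_page_cache"))
			fputs(line, stdout);
	fclose(f);
	return 0;
}
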
From patchwork Wed Nov 27 08:22:01 2024
X-Patchwork-Submitter: Gregory Price <gourry@gourry.net>
X-Patchwork-Id: 13887195
From: Gregory Price <gourry@gourry.net>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, nehagholkar@meta.com, abhishekd@meta.com,
	kernel-team@meta.com, david@redhat.com, ying.huang@intel.com,
	nphamcs@gmail.com, gourry@gourry.net, akpm@linux-foundation.org,
	hannes@cmpxchg.org, feng.tang@intel.com, kbusch@meta.com
Subject: [PATCH 4/4] migrate,sysfs: add pagecache promotion
Date: Wed, 27 Nov 2024 03:22:01 -0500
Message-ID: <20241127082201.1276-5-gourry@gourry.net>
In-Reply-To: <20241127082201.1276-1-gourry@gourry.net>
References: <20241127082201.1276-1-gourry@gourry.net>

Add /sys/kernel/mm/numa/pagecache_promotion_enabled.

When page cache lands on lower tiers, there is no way for promotion to
occur unless it becomes memory-mapped and exposed to NUMA hint faults.
Just adding a mechanism to promote pages unconditionally, however,
opens up a significant possibility of performance regressions.

Similar to the `demotion_enabled` sysfs entry, provide a sysfs toggle
to enable and disable page cache promotion. This option will enable
opportunistic promotion of unmapped page cache during syscall access.

This option is intended for operational conditions where demoted page
cache will eventually contain memory which becomes hot - and where said
memory is likely to cause performance issues due to being trapped on
the lower tier of memory.

A page cache folio is considered a promotion candidate when:
  0) tiering and pagecache promotion are enabled
  1) the folio resides on a node not in the top tier
  2) the folio is already marked referenced and active
  3) multiple accesses in the (referenced & active) state occur quickly

Since promotion is not safe to execute unconditionally from within
folio_mark_accessed, we defer promotion to a new task_work captured in
the task_struct. This ensures that the task doing the access has some
hand in promoting pages - even among deduplicated read-only files.

We use numa_hint_fault_latency to help identify when a folio is
accessed multiple times in a short period. Along with folio flag
checks, this helps us minimize promoting pages on the first few
accesses. The promotion node is always the local node of the promoting
CPU.

Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 .../ABI/testing/sysfs-kernel-mm-numa | 20 +++++++
 include/linux/memory-tiers.h         |  2 +
 include/linux/migrate.h              |  4 ++
 include/linux/sched.h                |  3 +
 include/linux/sched/numa_balancing.h |  5 ++
 init/init_task.c                     |  1 +
 kernel/sched/fair.c                  | 26 ++++++++-
 mm/memory-tiers.c                    | 27 +++++++++
 mm/migrate.c                         | 56 +++++++++++++++++++
 mm/swap.c                            |  3 +
 10 files changed, 146 insertions(+), 1 deletion(-)

diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-numa b/Documentation/ABI/testing/sysfs-kernel-mm-numa
index 77e559d4ed80..b846e7d80cba 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-numa
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-numa
@@ -22,3 +22,23 @@ Description:	Enable/disable demoting pages during reclaim
 		the guarantees of cpusets.  This should not be enabled
 		on systems which need strict cpuset location
 		guarantees.
+
+What:		/sys/kernel/mm/numa/pagecache_promotion_enabled
+Date:		November 2024
+Contact:	Linux memory management mailing list <linux-mm@kvack.org>
+Description:	Enable/disable promoting pages during file access
+
+		Page migration during file access is intended for systems
+		with tiered memory configurations that have significant
+		unmapped file cache usage. By default, file cache memory
+		on slower tiers will not be opportunistically promoted by
+		normal NUMA hint faults, because the system has no way to
+		track them. This option enables opportunistic promotion
+		of pages that are accessed via syscall (e.g. read/write)
+		if multiple accesses occur in quick succession.
+
+		It may move data to a NUMA node that does not fall into
+		the cpuset of the allocating process which might be
+		construed to violate the guarantees of cpusets.  This
+		should not be enabled on systems which need strict cpuset
+		location guarantees.
diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 0dc0cf2863e2..fa96a67b8996 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -37,6 +37,7 @@ struct access_coordinate;
 
 #ifdef CONFIG_NUMA
 extern bool numa_demotion_enabled;
+extern bool numa_pagecache_promotion_enabled;
 extern struct memory_dev_type *default_dram_type;
 extern nodemask_t default_dram_nodes;
 struct memory_dev_type *alloc_memory_type(int adistance);
@@ -76,6 +77,7 @@ static inline bool node_is_toptier(int node)
 #else
 
 #define numa_demotion_enabled	false
+#define numa_pagecache_promotion_enabled	false
 #define default_dram_type	NULL
 #define default_dram_nodes	NODE_MASK_NONE
 /*
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 002e49b2ebd9..c288c16b1311 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -146,6 +146,7 @@ int migrate_misplaced_folio_prepare(struct folio *folio,
 		struct vm_area_struct *vma, int node);
 int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
 			   int node);
+void promotion_candidate(struct folio *folio);
 #else
 static inline int migrate_misplaced_folio_prepare(struct folio *folio,
 		struct vm_area_struct *vma, int node)
@@ -157,6 +158,9 @@ static inline int migrate_misplaced_folio(struct folio *folio,
 {
 	return -EAGAIN; /* can't migrate now */
 }
+static inline void promotion_candidate(struct folio *folio)
+{
+}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_MIGRATION
diff --git a/include/linux/sched.h b/include/linux/sched.h
index bb343136ddd0..8ddd4986e57f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1353,6 +1353,9 @@ struct task_struct {
 	unsigned long numa_faults_locality[3];
 
 	unsigned long numa_pages_migrated;
+
+	struct callback_head numa_promo_work;
+	struct list_head promo_list;
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_RSEQ
diff --git a/include/linux/sched/numa_balancing.h b/include/linux/sched/numa_balancing.h
index 52b22c5c396d..cc7750d754ff 100644
--- a/include/linux/sched/numa_balancing.h
+++ b/include/linux/sched/numa_balancing.h
@@ -32,6 +32,7 @@ extern void set_numabalancing_state(bool enabled);
 extern void task_numa_free(struct task_struct *p, bool final);
 bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio,
 				int src_nid, int dst_cpu);
+int numa_hint_fault_latency(struct folio *folio);
 #else
 static inline void task_numa_fault(int last_node, int node, int pages,
 				   int flags)
@@ -52,6 +53,10 @@ static inline bool should_numa_migrate_memory(struct task_struct *p,
 {
 	return true;
 }
+static inline int numa_hint_fault_latency(struct folio *folio)
+{
+	return 0;
+}
 #endif
 
 #endif /* _LINUX_SCHED_NUMA_BALANCING_H */
diff --git a/init/init_task.c b/init/init_task.c
index 136a8231355a..ee33e508067e 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -186,6 +186,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
 	.numa_preferred_nid = NUMA_NO_NODE,
 	.numa_group	= NULL,
 	.numa_faults	= NULL,
+	.promo_list	= LIST_HEAD_INIT(init_task.promo_list),
 #endif
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	.kasan_depth	= 1,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2d16c8545c71..34d66faa50f9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1842,7 +1843,7 @@ static bool pgdat_free_space_enough(struct pglist_data *pgdat)
  * The smaller the hint page fault latency, the higher the possibility
  * for the page to be hot.
  */
-static int numa_hint_fault_latency(struct folio *folio)
+int numa_hint_fault_latency(struct folio *folio)
 {
 	int last_time, time;
 
@@ -3528,6 +3529,27 @@ static void task_numa_work(struct callback_head *work)
 	}
 }
 
+static void task_numa_promotion_work(struct callback_head *work)
+{
+	struct task_struct *p = current;
+	struct list_head *promo_list = &p->promo_list;
+	struct folio *folio, *tmp;
+	int nid = numa_node_id();
+
+	SCHED_WARN_ON(p != container_of(work, struct task_struct, numa_promo_work));
+
+	work->next = work;
+
+	if (list_empty(promo_list))
+		return;
+
+	list_for_each_entry_safe(folio, tmp, promo_list, lru) {
+		list_del_init(&folio->lru);
+		migrate_misplaced_folio(folio, NULL, nid);
+	}
+}
+
+
 void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
 {
 	int mm_users = 0;
@@ -3552,8 +3574,10 @@ void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
 	RCU_INIT_POINTER(p->numa_group, NULL);
 	p->last_task_numa_placement = 0;
 	p->last_sum_exec_runtime = 0;
+	INIT_LIST_HEAD(&p->promo_list);
 
 	init_task_work(&p->numa_work, task_numa_work);
+	init_task_work(&p->numa_promo_work, task_numa_promotion_work);
 
 	/* New address space, reset the preferred nid */
 	if (!(clone_flags & CLONE_VM)) {
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index fc14fe53e9b7..4c44598e485e 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -935,6 +935,7 @@ static int __init memory_tier_init(void)
 subsys_initcall(memory_tier_init);
 
 bool numa_demotion_enabled = false;
+bool numa_pagecache_promotion_enabled;
 
 #ifdef CONFIG_MIGRATION
 #ifdef CONFIG_SYSFS
@@ -957,11 +958,37 @@ static ssize_t demotion_enabled_store(struct kobject *kobj,
 	return count;
 }
 
+static ssize_t pagecache_promotion_enabled_show(struct kobject *kobj,
+						struct kobj_attribute *attr,
+						char *buf)
+{
+	return sysfs_emit(buf, "%s\n",
+			  numa_pagecache_promotion_enabled ? "true" : "false");
+}
+
+static ssize_t pagecache_promotion_enabled_store(struct kobject *kobj,
+						 struct kobj_attribute *attr,
+						 const char *buf, size_t count)
+{
+	ssize_t ret;
+
+	ret = kstrtobool(buf, &numa_pagecache_promotion_enabled);
+	if (ret)
+		return ret;
+
+	return count;
+}
+
+
 static struct kobj_attribute numa_demotion_enabled_attr =
 	__ATTR_RW(demotion_enabled);
+static struct kobj_attribute numa_pagecache_promotion_enabled_attr =
+	__ATTR_RW(pagecache_promotion_enabled);
+
 static struct attribute *numa_attrs[] = {
 	&numa_demotion_enabled_attr.attr,
+	&numa_pagecache_promotion_enabled_attr.attr,
 	NULL,
 };
diff --git a/mm/migrate.c b/mm/migrate.c
index 3b0bd3f21ac3..2cd9faed6ab8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -44,6 +44,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 
@@ -2711,5 +2713,59 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
 	BUG_ON(!list_empty(&migratepages));
 	return nr_remaining ? -EAGAIN : 0;
 }
+
+/**
+ * promotion_candidate() - report a promotion candidate folio
+ *
+ * @folio: The folio reported as a candidate
+ *
+ * Records folio access time and places the folio on the task promotion list
+ * if access time is less than the threshold. The folio will be isolated from
+ * LRU if selected, and task_work will putback the folio on promotion failure.
+ *
+ * Takes a folio reference that will be released in task work.
+ */
+void promotion_candidate(struct folio *folio)
+{
+	struct task_struct *task = current;
+	struct list_head *promo_list = &task->promo_list;
+	struct callback_head *work = &task->numa_promo_work;
+	struct address_space *mapping = folio_mapping(folio);
+	bool write = mapping ? mapping->gfp_mask & __GFP_WRITE : false;
+	int nid = folio_nid(folio);
+	int flags, last_cpupid;
+
+	/*
+	 * Only do this work if:
+	 * 1) tiering and pagecache promotion are enabled
+	 * 2) the page can actually be promoted
+	 * 3) The hint-fault latency is relatively hot
+	 * 4) the folio is not already isolated
+	 * 5) This is not a kernel thread context
+	 */
+	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+	    !numa_pagecache_promotion_enabled ||
+	    node_is_toptier(nid) ||
+	    numa_hint_fault_latency(folio) >= PAGE_ACCESS_TIME_MASK ||
+	    folio_test_isolated(folio) ||
+	    (current->flags & PF_KTHREAD)) {
+		return;
+	}
+
+	nid = numa_migrate_check(folio, NULL, 0, &flags, write, &last_cpupid);
+	if (nid == NUMA_NO_NODE)
+		return;
+
+	if (migrate_misplaced_folio_prepare(folio, NULL, nid))
+		return;
+
+	/* Ensure task can schedule work, otherwise we'll leak folios */
+	if (list_empty(promo_list) && task_work_add(task, work, TWA_RESUME)) {
+		folio_putback_lru(folio);
+		return;
+	}
+	list_add(&folio->lru, promo_list);
+}
+EXPORT_SYMBOL(promotion_candidate);
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_NUMA */
diff --git a/mm/swap.c b/mm/swap.c
index 10decd9dffa1..9cf4c1f73fe5 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include
 
 #include "internal.h"
 
@@ -453,6 +454,8 @@ void folio_mark_accessed(struct folio *folio)
 		__lru_cache_activate_folio(folio);
 		folio_clear_referenced(folio);
 		workingset_activation(folio);
+	} else {
+		promotion_candidate(folio);
 	}
 	if (folio_test_idle(folio))
 		folio_clear_idle(folio);
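
The new toggle defaults to off, and promotion_candidate() additionally
requires NUMA balancing to run in memory tiering mode. A userspace sketch for
enabling the feature (the sysfs path comes from this patch; the numa_balancing
sysctl and its tiering value are assumed from the existing NUMA balancing
interface, and kstrtobool accepts values such as "1"/"0" or "y"/"n"):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0 || write(fd, val, strlen(val)) < 0) {
		perror(path);
		if (fd >= 0)
			close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	/* enable tiering-mode balancing (NUMA_BALANCING_MEMORY_TIERING) */
	if (write_str("/proc/sys/kernel/numa_balancing", "2"))
		return 1;
	/* enable the toggle added by this patch */
	return write_str("/sys/kernel/mm/numa/pagecache_promotion_enabled", "1") ? 1 : 0;
}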