From patchwork Thu Jan 30 10:00:39 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13954414
From: "Kirill A. Shutemov"
To: Andrew Morton, "Matthew Wilcox (Oracle)", Jens Axboe
Cc: "Jason A. Donenfeld", "Kirill A. Shutemov", Andi Shyti, Chengming Zhou,
 Christian Brauner, Christophe Leroy, Dan Carpenter, David Airlie,
 David Hildenbrand, Hao Ge, Jani Nikula, Johannes Weiner, Joonas Lahtinen,
 Josef Bacik, Masami Hiramatsu, Mathieu Desnoyers, Miklos Szeredi, Nhat Pham,
 Oscar Salvador, Ran Xiaokai, Rodrigo Vivi, Simona Vetter, Steven Rostedt,
 Tvrtko Ursulin, Vlastimil Babka, Yosry Ahmed, Yu Zhao,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCHv3 01/11] mm/migrate: Transfer PG_dropbehind to the new folio
Date: Thu, 30 Jan 2025 12:00:39 +0200
Message-ID: <20250130100050.1868208-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>
References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>

Do not lose the flag on page migration. Ideally, such folios should be
freed instead of migrated, but that requires finding the right spot to
do it, plus proper testing. Transfer the flag for now.

Signed-off-by: Kirill A. Shutemov
Acked-by: Yu Zhao
---
 mm/migrate.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/migrate.c b/mm/migrate.c
index fb19a18892c8..1fb0698273f7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -682,6 +682,10 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 	if (folio_test_dirty(folio))
 		folio_set_dirty(newfolio);
 
+	/* TODO: free the folio on migration? */
+	if (folio_test_dropbehind(folio))
+		folio_set_dropbehind(newfolio);
+
 	if (folio_test_young(folio))
 		folio_set_young(newfolio);
 	if (folio_test_idle(folio))
From patchwork Thu Jan 30 10:00:40 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13954413
From: "Kirill A. Shutemov"
Subject: [PATCHv3 02/11] drm/i915/gem: Convert __shmem_writeback() to folios
Date: Thu, 30 Jan 2025 12:00:40 +0200
Message-ID: <20250130100050.1868208-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>
References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>

Use folios instead of pages. This is preparation for removing
PG_reclaim.

Signed-off-by: Kirill A. Shutemov
Acked-by: David Hildenbrand
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index fe69f2c8527d..9016832b20fc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -320,25 +320,25 @@ void __shmem_writeback(size_t size, struct address_space *mapping)
 	/* Begin writeback on each dirty page */
 	for (i = 0; i < size >> PAGE_SHIFT; i++) {
-		struct page *page;
+		struct folio *folio;
 
-		page = find_lock_page(mapping, i);
-		if (!page)
+		folio = filemap_lock_folio(mapping, i);
+		if (!folio)
 			continue;
 
-		if (!page_mapped(page) && clear_page_dirty_for_io(page)) {
+		if (!folio_mapped(folio) && folio_clear_dirty_for_io(folio)) {
 			int ret;
 
-			SetPageReclaim(page);
-			ret = mapping->a_ops->writepage(page, &wbc);
-			if (!PageWriteback(page))
-				ClearPageReclaim(page);
+			folio_set_reclaim(folio);
+			ret = mapping->a_ops->writepage(&folio->page, &wbc);
+			if (!folio_test_writeback(folio))
+				folio_clear_reclaim(folio);
 			if (!ret)
 				goto put;
 		}
-		unlock_page(page);
+		folio_unlock(folio);
 put:
-		put_page(page);
+		folio_put(folio);
 	}
 }
From patchwork Thu Jan 30 10:00:41 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13954415
From: "Kirill A. Shutemov"
Subject: [PATCHv3 03/11] drm/i915/gem: Use PG_dropbehind instead of PG_reclaim
Date: Thu, 30 Jan 2025 12:00:41 +0200
Message-ID: <20250130100050.1868208-4-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>
References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>

The recently introduced PG_dropbehind allows for freeing folios
immediately after writeback. Unlike PG_reclaim, it does not need vmscan
to be involved to get the folio freed.

Instead of using folio_set_reclaim(), use folio_set_dropbehind() in
__shmem_writeback().

It is safe to leave PG_dropbehind on the folio if, for some reason
(bug?), the folio is not in a writeback state after ->writepage(). In
these cases, the kernel had to clear PG_reclaim as it shared a page
flag bit with PG_readahead.

Signed-off-by: Kirill A. Shutemov
Acked-by: David Hildenbrand
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 9016832b20fc..c1724847c001 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -329,10 +329,8 @@ void __shmem_writeback(size_t size, struct address_space *mapping)
 		if (!folio_mapped(folio) && folio_clear_dirty_for_io(folio)) {
 			int ret;
 
-			folio_set_reclaim(folio);
+			folio_set_dropbehind(folio);
 			ret = mapping->a_ops->writepage(&folio->page, &wbc);
-			if (!folio_test_writeback(folio))
-				folio_clear_reclaim(folio);
 			if (!ret)
 				goto put;
 		}
From patchwork Thu Jan 30 10:00:42 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13954423
From: "Kirill A. Shutemov"
Subject: [PATCHv3 04/11] mm/zswap: Use PG_dropbehind instead of PG_reclaim
Date: Thu, 30 Jan 2025 12:00:42 +0200
Message-ID: <20250130100050.1868208-5-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>
References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>

The recently introduced PG_dropbehind allows for freeing folios
immediately after writeback. Unlike PG_reclaim, it does not need vmscan
to be involved to get the folio freed.

Instead of using folio_set_reclaim(), use folio_set_dropbehind() in
zswap_writeback_entry().

Signed-off-by: Kirill A. Shutemov
Acked-by: David Hildenbrand
Acked-by: Yosry Ahmed
Acked-by: Nhat Pham
---
 mm/zswap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 6504174fbc6a..611adf3d46a5 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1102,8 +1102,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	/* folio is up to date */
 	folio_mark_uptodate(folio);
 
-	/* move it to the tail of the inactive list after end_writeback */
-	folio_set_reclaim(folio);
+	/* free the folio after writeback */
+	folio_set_dropbehind(folio);
 
 	/* start writeback */
 	__swap_writepage(folio, &wbc);
smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=UXUt/KnL; arc=none smtp.client-ip=198.175.65.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="UXUt/KnL" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1738231282; x=1769767282; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bSDJ3F3koSrA0Iqp0ErSGMFZJet+A87xTcyf884hYLs=; b=UXUt/KnLhD59pg+09nQlMYnBIHBI1mGlNYtIIMWJ7EHZr2DS+K3eZJGk fn+BwurHS7qevoBJFyEgoiTmhgXZeGDCk9BzqLtLzH6TW+HLGIwydGwtI bwcgiDUhigJX61PJVlHekSD6aiBctg+rb5cNWa+mFQe0q9h09EMHkKe/4 MGs3SneU5fdlZFRSTf2PaJAPX1whPVONqAIx/YCKzAznDjdS9DHrHP7R/ livJOPd8OQ58thgpJ04Lhp+UqMn63lKJMbP44SWl1muajrKubguDyxDEx wrZPRzD3VrgYuy1JaxGj7FpIgZrBkzPut/k2BAd7RuUhaEAiLu4YZck2m Q==; X-CSE-ConnectionGUID: cPMnkBnMTTuIrrqpwncQiA== X-CSE-MsgGUID: tBK4CsMdTGKiqmFrABVP2w== X-IronPort-AV: E=McAfee;i="6700,10204,11330"; a="49752473" X-IronPort-AV: E=Sophos;i="6.13,245,1732608000"; d="scan'208";a="49752473" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by orvoesa105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Jan 2025 02:01:20 -0800 X-CSE-ConnectionGUID: /TQ93vBuTha3LJza9XOU3w== X-CSE-MsgGUID: jYDt27dDQGi5SZd9YtqzLQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,245,1732608000"; d="scan'208";a="109187924" Received: from black.fi.intel.com ([10.237.72.28]) by fmviesa007.fm.intel.com with ESMTP; 30 Jan 2025 02:01:11 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id DE129157; Thu, 30 Jan 2025 12:01:01 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. 
Donenfeld" , "Kirill A. Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv3 05/11] mm/truncate: Use folio_set_dropbehind() instead of deactivate_file_folio() Date: Thu, 30 Jan 2025 12:00:43 +0200 Message-ID: <20250130100050.1868208-6-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com> References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The recently introduced PG_dropbehind allows for freeing folios immediately after writeback. Unlike PG_reclaim, it does not need vmscan to be involved to get the folio freed. The new flag allows to replace whole deactivate_file_folio() machinery with simple folio_set_dropbehind(). Signed-off-by: Kirill A. 
Shutemov Acked-by: Yu Zhao --- mm/internal.h | 1 - mm/swap.c | 90 --------------------------------------------------- mm/truncate.c | 5 ++- 3 files changed, 4 insertions(+), 92 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index 109ef30fee11..93e6dac2077a 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -379,7 +379,6 @@ static inline vm_fault_t vmf_anon_prepare(struct vm_fault *vmf) vm_fault_t do_swap_page(struct vm_fault *vmf); void folio_rotate_reclaimable(struct folio *folio); bool __folio_end_writeback(struct folio *folio); -void deactivate_file_folio(struct folio *folio); void folio_activate(struct folio *folio); void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas, diff --git a/mm/swap.c b/mm/swap.c index fc8281ef4241..7a0dffd5973a 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -54,7 +54,6 @@ struct cpu_fbatches { */ local_lock_t lock; struct folio_batch lru_add; - struct folio_batch lru_deactivate_file; struct folio_batch lru_deactivate; struct folio_batch lru_lazyfree; #ifdef CONFIG_SMP @@ -524,68 +523,6 @@ void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma) folio_add_lru(folio); } -/* - * If the folio cannot be invalidated, it is moved to the - * inactive list to speed up its reclaim. It is moved to the - * head of the list, rather than the tail, to give the flusher - * threads some time to write it out, as this is much more - * effective than the single-page writeout from reclaim. - * - * If the folio isn't mapped and dirty/writeback, the folio - * could be reclaimed asap using the reclaim flag. - * - * 1. active, mapped folio -> none - * 2. active, dirty/writeback folio -> inactive, head, reclaim - * 3. inactive, mapped folio -> none - * 4. inactive, dirty/writeback folio -> inactive, head, reclaim - * 5. inactive, clean -> inactive, tail - * 6. 
Others -> none - * - * In 4, it moves to the head of the inactive list so the folio is - * written out by flusher threads as this is much more efficient - * than the single-page writeout from reclaim. - */ -static void lru_deactivate_file(struct lruvec *lruvec, struct folio *folio) -{ - bool active = folio_test_active(folio) || lru_gen_enabled(); - long nr_pages = folio_nr_pages(folio); - - if (folio_test_unevictable(folio)) - return; - - /* Some processes are using the folio */ - if (folio_mapped(folio)) - return; - - lruvec_del_folio(lruvec, folio); - folio_clear_active(folio); - folio_clear_referenced(folio); - - if (folio_test_writeback(folio) || folio_test_dirty(folio)) { - /* - * Setting the reclaim flag could race with - * folio_end_writeback() and confuse readahead. But the - * race window is _really_ small and it's not a critical - * problem. - */ - lruvec_add_folio(lruvec, folio); - folio_set_reclaim(folio); - } else { - /* - * The folio's writeback ended while it was in the batch. - * We move that folio to the tail of the inactive list. - */ - lruvec_add_folio_tail(lruvec, folio); - __count_vm_events(PGROTATED, nr_pages); - } - - if (active) { - __count_vm_events(PGDEACTIVATE, nr_pages); - __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, - nr_pages); - } -} - static void lru_deactivate(struct lruvec *lruvec, struct folio *folio) { long nr_pages = folio_nr_pages(folio); @@ -652,10 +589,6 @@ void lru_add_drain_cpu(int cpu) local_unlock_irqrestore(&cpu_fbatches.lock_irq, flags); } - fbatch = &fbatches->lru_deactivate_file; - if (folio_batch_count(fbatch)) - folio_batch_move_lru(fbatch, lru_deactivate_file); - fbatch = &fbatches->lru_deactivate; if (folio_batch_count(fbatch)) folio_batch_move_lru(fbatch, lru_deactivate); @@ -667,28 +600,6 @@ void lru_add_drain_cpu(int cpu) folio_activate_drain(cpu); } -/** - * deactivate_file_folio() - Deactivate a file folio. - * @folio: Folio to deactivate. 
- * - * This function hints to the VM that @folio is a good reclaim candidate, - * for example if its invalidation fails due to the folio being dirty - * or under writeback. - * - * Context: Caller holds a reference on the folio. - */ -void deactivate_file_folio(struct folio *folio) -{ - /* Deactivating an unevictable folio will not accelerate reclaim */ - if (folio_test_unevictable(folio)) - return; - - if (lru_gen_enabled() && lru_gen_clear_refs(folio)) - return; - - folio_batch_add_and_move(folio, lru_deactivate_file, true); -} - /* * folio_deactivate - deactivate a folio * @folio: folio to deactivate @@ -772,7 +683,6 @@ static bool cpu_needs_drain(unsigned int cpu) /* Check these in order of likelihood that they're not zero */ return folio_batch_count(&fbatches->lru_add) || folio_batch_count(&fbatches->lru_move_tail) || - folio_batch_count(&fbatches->lru_deactivate_file) || folio_batch_count(&fbatches->lru_deactivate) || folio_batch_count(&fbatches->lru_lazyfree) || folio_batch_count(&fbatches->lru_activate) || diff --git a/mm/truncate.c b/mm/truncate.c index e2e115adfbc5..8efa4e325e54 100644 --- a/mm/truncate.c +++ b/mm/truncate.c @@ -486,7 +486,10 @@ unsigned long mapping_try_invalidate(struct address_space *mapping, * of interest and try to speed up its reclaim. */ if (!ret) { - deactivate_file_folio(folio); + if (!folio_test_unevictable(folio) && + !folio_mapped(folio)) + folio_set_dropbehind(folio); + /* Likely in the lru cache of a remote CPU */ if (nr_failed) (*nr_failed)++; From patchwork Thu Jan 30 10:00:44 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13954416
From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A.
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCHv3 06/11] mm/vmscan: Use PG_dropbehind instead of PG_reclaim
Date: Thu, 30 Jan 2025 12:00:44 +0200
Message-ID: <20250130100050.1868208-7-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>
References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>

The recently introduced PG_dropbehind allows for freeing folios immediately after writeback. Unlike PG_reclaim, it does not need vmscan to be involved to get the folio freed.

Instead of using folio_set_reclaim(), use folio_set_dropbehind() in pageout().

It is safe to leave PG_dropbehind on the folio if, for some reason (bug?), the folio is not in a writeback state after ->writepage(). Previously, the kernel had to clear PG_reclaim in such cases because it shared a page flag bit with PG_readahead.

Signed-off-by: Kirill A.
Shutemov Acked-by: David Hildenbrand --- mm/vmscan.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index bc1826020159..c97adb0fdaa4 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -692,19 +692,16 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping, if (shmem_mapping(mapping) && folio_test_large(folio)) wbc.list = folio_list; - folio_set_reclaim(folio); + folio_set_dropbehind(folio); + res = mapping->a_ops->writepage(&folio->page, &wbc); if (res < 0) handle_write_error(mapping, folio, res); if (res == AOP_WRITEPAGE_ACTIVATE) { - folio_clear_reclaim(folio); + folio_clear_dropbehind(folio); return PAGE_ACTIVATE; } - if (!folio_test_writeback(folio)) { - /* synchronous write or broken a_ops? */ - folio_clear_reclaim(folio); - } trace_mm_vmscan_write_folio(folio); node_stat_add_folio(folio, NR_VMSCAN_WRITE); return PAGE_SUCCESS; From patchwork Thu Jan 30 10:00:45 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13954417
From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A.
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCHv3 07/11] mm/vmscan: Use PG_dropbehind instead of PG_reclaim in shrink_folio_list()
Date: Thu, 30 Jan 2025 12:00:45 +0200
Message-ID: <20250130100050.1868208-8-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>
References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>

The recently introduced PG_dropbehind allows for freeing folios immediately after writeback. Unlike PG_reclaim, it does not need vmscan to be involved to get the folio freed.

Instead of using folio_set_reclaim(), use folio_set_dropbehind() in shrink_folio_list().

It is safe to leave PG_dropbehind on the folio if, for some reason (bug?), the folio is not in a writeback state after ->writepage(). Previously, the kernel had to clear PG_reclaim in such cases because it shared a page flag bit with PG_readahead.

Also use PG_dropbehind instead of PG_reclaim to detect I/O congestion.

Signed-off-by: Kirill A.
Shutemov Acked-by: David Hildenbrand --- mm/vmscan.c | 30 ++++++++---------------------- 1 file changed, 8 insertions(+), 22 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index c97adb0fdaa4..db6e4552997c 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1140,7 +1140,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * for immediate reclaim are making it to the end of * the LRU a second time. */ - if (writeback && folio_test_reclaim(folio)) + if (writeback && folio_test_dropbehind(folio)) stat->nr_congested += nr_pages; /* @@ -1149,7 +1149,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * * 1) If reclaim is encountering an excessive number * of folios under writeback and this folio has both - * the writeback and reclaim flags set, then it + * the writeback and dropbehind flags set, then it * indicates that folios are being queued for I/O but * are being recycled through the LRU before the I/O * can complete. Waiting on the folio itself risks an @@ -1173,7 +1173,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * would probably show more reasons. * * 3) Legacy memcg encounters a folio that already has the - * reclaim flag set. memcg does not have any dirty folio + * dropbehind flag set. memcg does not have any dirty folio * throttling so we could easily OOM just because too many * folios are in writeback and there is nothing else to * reclaim. Wait for the writeback to complete. 
@@ -1190,30 +1190,16 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, if (folio_test_writeback(folio)) { /* Case 1 above */ if (current_is_kswapd() && - folio_test_reclaim(folio) && + folio_test_dropbehind(folio) && test_bit(PGDAT_WRITEBACK, &pgdat->flags)) { stat->nr_immediate += nr_pages; goto activate_locked; /* Case 2 above */ } else if (writeback_throttling_sane(sc) || - !folio_test_reclaim(folio) || + !folio_test_dropbehind(folio) || !may_enter_fs(folio, sc->gfp_mask)) { - /* - * This is slightly racy - - * folio_end_writeback() might have - * just cleared the reclaim flag, then - * setting the reclaim flag here ends up - * interpreted as the readahead flag - but - * that does not matter enough to care. - * What we do want is for this folio to - * have the reclaim flag set next time - * memcg reclaim reaches the tests above, - * so it will then wait for writeback to - * avoid OOM; and it's also appropriate - * in global reclaim. - */ - folio_set_reclaim(folio); + folio_set_dropbehind(folio); stat->nr_writeback += nr_pages; goto activate_locked; @@ -1368,7 +1354,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, */ if (folio_is_file_lru(folio) && (!current_is_kswapd() || - !folio_test_reclaim(folio) || + !folio_test_dropbehind(folio) || !test_bit(PGDAT_DIRTY, &pgdat->flags))) { /* * Immediately reclaim when written back. @@ -1378,7 +1364,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, */ node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE, nr_pages); - folio_set_reclaim(folio); + folio_set_dropbehind(folio); goto activate_locked; } From patchwork Thu Jan 30 10:00:46 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13954418
From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A.
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCHv3 08/11] mm/mglru: Check PG_dropbehind instead of PG_reclaim in lru_gen_folio_seq()
Date: Thu, 30 Jan 2025 12:00:46 +0200
Message-ID: <20250130100050.1868208-9-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>
References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>

The kernel now sets PG_dropbehind instead of PG_reclaim everywhere. Check PG_dropbehind in lru_gen_folio_seq().

There is no need to check for dirty and writeback as there's no conflict with PG_readahead anymore.

Signed-off-by: Kirill A.
Shutemov Acked-by: David Hildenbrand --- include/linux/mm_inline.h | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index f9157a0c42a5..f353d3c610ac 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -241,8 +241,7 @@ static inline unsigned long lru_gen_folio_seq(struct lruvec *lruvec, struct foli else if (reclaiming) gen = MAX_NR_GENS; else if ((!folio_is_file_lru(folio) && !folio_test_swapcache(folio)) || - (folio_test_reclaim(folio) && - (folio_test_dirty(folio) || folio_test_writeback(folio)))) + folio_test_dropbehind(folio)) gen = MIN_NR_GENS; else gen = MAX_NR_GENS - folio_test_workingset(folio);
From patchwork Thu Jan 30 10:00:47 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13954421
From: "Kirill
A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe
Cc: "Jason A. Donenfeld" , "Kirill A. Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCHv3 09/11] mm: Remove PG_reclaim
Date: Thu, 30 Jan 2025 12:00:47 +0200
Message-ID: <20250130100050.1868208-10-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>
References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com>

Nobody sets the flag anymore. Remove PG_reclaim, making PG_readahead the exclusive user of the page flag bit.

Signed-off-by: Kirill A.
Shutemov Acked-by: Yu Zhao --- fs/fuse/dev.c | 2 +- fs/proc/page.c | 2 +- include/linux/mm_inline.h | 15 ------- include/linux/page-flags.h | 15 +++---- include/trace/events/mmflags.h | 2 +- include/uapi/linux/kernel-page-flags.h | 2 +- mm/filemap.c | 12 ----- mm/migrate.c | 10 +---- mm/page-writeback.c | 16 +------ mm/page_io.c | 15 +++---- mm/swap.c | 61 ++------------------------ mm/vmscan.c | 7 --- tools/mm/page-types.c | 8 +--- 13 files changed, 22 insertions(+), 145 deletions(-) diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c index 27ccae63495d..20005e2e1d28 100644 --- a/fs/fuse/dev.c +++ b/fs/fuse/dev.c @@ -827,7 +827,7 @@ static int fuse_check_folio(struct folio *folio) 1 << PG_lru | 1 << PG_active | 1 << PG_workingset | - 1 << PG_reclaim | + 1 << PG_readahead | 1 << PG_waiters | LRU_GEN_MASK | LRU_REFS_MASK))) { dump_page(&folio->page, "fuse: trying to steal weird page"); diff --git a/fs/proc/page.c b/fs/proc/page.c index a55f5acefa97..59860ba2393c 100644 --- a/fs/proc/page.c +++ b/fs/proc/page.c @@ -189,7 +189,7 @@ u64 stable_page_flags(const struct page *page) u |= kpf_copy_bit(k, KPF_LRU, PG_lru); u |= kpf_copy_bit(k, KPF_REFERENCED, PG_referenced); u |= kpf_copy_bit(k, KPF_ACTIVE, PG_active); - u |= kpf_copy_bit(k, KPF_RECLAIM, PG_reclaim); + u |= kpf_copy_bit(k, KPF_READAHEAD, PG_readahead); #define SWAPCACHE ((1 << PG_swapbacked) | (1 << PG_swapcache)) if ((k & SWAPCACHE) == SWAPCACHE) diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index f353d3c610ac..e5049a975579 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -270,7 +270,6 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags); lru_gen_update_size(lruvec, folio, -1, gen); - /* for folio_rotate_reclaimable() */ if (reclaiming) list_add_tail(&folio->lru, &lrugen->folios[gen][type][zone]); else @@ -349,20 +348,6 @@ void lruvec_add_folio(struct lruvec *lruvec, struct 
folio *folio) list_add(&folio->lru, &lruvec->lists[lru]); } -static __always_inline -void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio) -{ - enum lru_list lru = folio_lru_list(folio); - - if (lru_gen_add_folio(lruvec, folio, true)) - return; - - update_lru_size(lruvec, lru, folio_zonenum(folio), - folio_nr_pages(folio)); - /* This is not expected to be used on LRU_UNEVICTABLE */ - list_add_tail(&folio->lru, &lruvec->lists[lru]); -} - static __always_inline void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio) { diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 3f6a64ff968a..8cbfb82e7b4f 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -63,8 +63,8 @@ * might lose their PG_swapbacked flag when they simply can be dropped (e.g. as * a result of MADV_FREE). * - * PG_referenced, PG_reclaim are used for page reclaim for anonymous and - * file-backed pagecache (see mm/vmscan.c). + * PG_referenced is used for page reclaim for anonymous and file-backed + * pagecache (see mm/vmscan.c). * * PG_arch_1 is an architecture specific page state bit. The generic code * guarantees that this bit is cleared for a page when it first is entered into @@ -107,7 +107,7 @@ enum pageflags { PG_reserved, PG_private, /* If pagecache, has fs-private data */ PG_private_2, /* If pagecache, has fs aux data */ - PG_reclaim, /* To be reclaimed asap */ + PG_readahead, PG_swapbacked, /* Page is backed by RAM/swap */ PG_unevictable, /* Page is "unevictable" */ PG_dropbehind, /* drop pages on IO completion */ @@ -129,8 +129,6 @@ enum pageflags { #endif __NR_PAGEFLAGS, - PG_readahead = PG_reclaim, - /* Anonymous memory (and shmem) */ PG_swapcache = PG_owner_priv_1, /* Swap page: swp_entry_t in private */ /* Some filesystems */ @@ -168,7 +166,7 @@ enum pageflags { PG_xen_remapped = PG_owner_priv_1, /* non-lru isolated movable page */ - PG_isolated = PG_reclaim, + PG_isolated = PG_readahead, /* Only valid for buddy pages. 
Used to track pages that are reported */ PG_reported = PG_uptodate, @@ -187,7 +185,7 @@ enum pageflags { /* At least one page in this folio has the hwpoison flag set */ PG_has_hwpoisoned = PG_active, PG_large_rmappable = PG_workingset, /* anon or file-backed */ - PG_partially_mapped = PG_reclaim, /* was identified to be partially mapped */ + PG_partially_mapped = PG_readahead, /* was identified to be partially mapped */ }; #define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1) @@ -594,9 +592,6 @@ TESTPAGEFLAG(Writeback, writeback, PF_NO_TAIL) TESTSCFLAG(Writeback, writeback, PF_NO_TAIL) FOLIO_FLAG(mappedtodisk, FOLIO_HEAD_PAGE) -/* PG_readahead is only used for reads; PG_reclaim is only for writes */ -PAGEFLAG(Reclaim, reclaim, PF_NO_TAIL) - TESTCLEARFLAG(Reclaim, reclaim, PF_NO_TAIL) FOLIO_FLAG(readahead, FOLIO_HEAD_PAGE) FOLIO_TEST_CLEAR_FLAG(readahead, FOLIO_HEAD_PAGE) diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index 72fbfe3caeaf..ff5d2e5da569 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -177,7 +177,7 @@ TRACE_DEFINE_ENUM(___GFP_LAST_BIT); DEF_PAGEFLAG_NAME(private_2), \ DEF_PAGEFLAG_NAME(writeback), \ DEF_PAGEFLAG_NAME(head), \ - DEF_PAGEFLAG_NAME(reclaim), \ + DEF_PAGEFLAG_NAME(readahead), \ DEF_PAGEFLAG_NAME(swapbacked), \ DEF_PAGEFLAG_NAME(unevictable), \ DEF_PAGEFLAG_NAME(dropbehind) \ diff --git a/include/uapi/linux/kernel-page-flags.h b/include/uapi/linux/kernel-page-flags.h index ff8032227876..e5a9a113e079 100644 --- a/include/uapi/linux/kernel-page-flags.h +++ b/include/uapi/linux/kernel-page-flags.h @@ -15,7 +15,7 @@ #define KPF_ACTIVE 6 #define KPF_SLAB 7 #define KPF_WRITEBACK 8 -#define KPF_RECLAIM 9 +#define KPF_READAHEAD 9 #define KPF_BUDDY 10 /* 11-20: new additions in 2.6.31 */ diff --git a/mm/filemap.c b/mm/filemap.c index 804d7365680c..ffbf3bda2a38 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1625,18 +1625,6 @@ void folio_end_writeback(struct folio *folio) 
VM_BUG_ON_FOLIO(!folio_test_writeback(folio), folio); - /* - * folio_test_clear_reclaim() could be used here but it is an - * atomic operation and overkill in this particular case. Failing - * to shuffle a folio marked for immediate reclaim is too mild - * a gain to justify taking an atomic operation penalty at the - * end of every folio writeback. - */ - if (folio_test_reclaim(folio)) { - folio_clear_reclaim(folio); - folio_rotate_reclaimable(folio); - } - /* * Writeback does not hold a folio reference of its own, relying * on truncation to wait for the clearing of PG_writeback. diff --git a/mm/migrate.c b/mm/migrate.c index 1fb0698273f7..19f913090aed 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -690,6 +690,8 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio) folio_set_young(newfolio); if (folio_test_idle(folio)) folio_set_idle(newfolio); + if (folio_test_readahead(folio)) + folio_set_readahead(newfolio); folio_migrate_refs(newfolio, folio); /* @@ -732,14 +734,6 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio) if (folio_test_writeback(newfolio)) folio_end_writeback(newfolio); - /* - * PG_readahead shares the same bit with PG_reclaim. The above - * end_page_writeback() may clear PG_readahead mistakenly, so set the - * bit after that. - */ - if (folio_test_readahead(folio)) - folio_set_readahead(newfolio); - folio_copy_owner(newfolio, folio); pgalloc_tag_swap(newfolio, folio); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 4f5970723cf2..f2b94a2cbfcf 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2888,22 +2888,8 @@ bool folio_mark_dirty(struct folio *folio) { struct address_space *mapping = folio_mapping(folio); - if (likely(mapping)) { - /* - * readahead/folio_deactivate could remain - * PG_readahead/PG_reclaim due to race with folio_end_writeback - * About readahead, if the folio is written, the flags would be - * reset. So no problem. 
- * About folio_deactivate, if the folio is redirtied, - * the flag will be reset. So no problem. but if the - * folio is used by readahead it will confuse readahead - * and make it restart the size rampup process. But it's - * a trivial problem. - */ - if (folio_test_reclaim(folio)) - folio_clear_reclaim(folio); + if (likely(mapping)) return mapping->a_ops->dirty_folio(mapping, folio); - } return noop_dirty_folio(mapping, folio); } diff --git a/mm/page_io.c b/mm/page_io.c index 9b983de351f9..0cb71f318fb1 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -37,14 +37,11 @@ static void __end_swap_bio_write(struct bio *bio) * Re-dirty the page in order to avoid it being reclaimed. * Also print a dire warning that things will go BAD (tm) * very quickly. - * - * Also clear PG_reclaim to avoid folio_rotate_reclaimable() */ folio_mark_dirty(folio); pr_alert_ratelimited("Write-error on swap-device (%u:%u:%llu)\n", MAJOR(bio_dev(bio)), MINOR(bio_dev(bio)), (unsigned long long)bio->bi_iter.bi_sector); - folio_clear_reclaim(folio); } folio_end_writeback(folio); } @@ -350,19 +347,17 @@ static void sio_write_complete(struct kiocb *iocb, long ret) if (ret != sio->len) { /* - * In the case of swap-over-nfs, this can be a - * temporary failure if the system has limited - * memory for allocating transmit buffers. - * Mark the page dirty and avoid - * folio_rotate_reclaimable but rate-limit the - * messages. + * In the case of swap-over-nfs, this can be a temporary failure + * if the system has limited memory for allocating transmit + * buffers. + * + * Mark the page dirty but rate-limit the messages. 
*/ pr_err_ratelimited("Write error %ld on dio swapfile (%llu)\n", ret, swap_dev_pos(page_swap_entry(page))); for (p = 0; p < sio->pages; p++) { page = sio->bvec[p].bv_page; set_page_dirty(page); - ClearPageReclaim(page); } } diff --git a/mm/swap.c b/mm/swap.c index 7a0dffd5973a..96892a0d2491 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -59,14 +59,10 @@ struct cpu_fbatches { #ifdef CONFIG_SMP struct folio_batch lru_activate; #endif - /* Protecting the following batches which require disabling interrupts */ - local_lock_t lock_irq; - struct folio_batch lru_move_tail; }; static DEFINE_PER_CPU(struct cpu_fbatches, cpu_fbatches) = { .lock = INIT_LOCAL_LOCK(lock), - .lock_irq = INIT_LOCAL_LOCK(lock_irq), }; static void __page_cache_release(struct folio *folio, struct lruvec **lruvecp, @@ -175,29 +171,20 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn) } static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch, - struct folio *folio, move_fn_t move_fn, - bool on_lru, bool disable_irq) + struct folio *folio, move_fn_t move_fn, bool on_lru) { - unsigned long flags; - if (on_lru && !folio_test_clear_lru(folio)) return; folio_get(folio); - if (disable_irq) - local_lock_irqsave(&cpu_fbatches.lock_irq, flags); - else - local_lock(&cpu_fbatches.lock); + local_lock(&cpu_fbatches.lock); if (!folio_batch_add(this_cpu_ptr(fbatch), folio) || folio_test_large(folio) || lru_cache_disabled()) folio_batch_move_lru(this_cpu_ptr(fbatch), move_fn); - if (disable_irq) - local_unlock_irqrestore(&cpu_fbatches.lock_irq, flags); - else - local_unlock(&cpu_fbatches.lock); + local_unlock(&cpu_fbatches.lock); } #define folio_batch_add_and_move(folio, op, on_lru) \ @@ -205,37 +192,9 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch, &cpu_fbatches.op, \ folio, \ op, \ - on_lru, \ - offsetof(struct cpu_fbatches, op) >= offsetof(struct cpu_fbatches, lock_irq) \ + on_lru \ ) -static void lru_move_tail(struct lruvec *lruvec, 
struct folio *folio) -{ - if (folio_test_unevictable(folio)) - return; - - lruvec_del_folio(lruvec, folio); - folio_clear_active(folio); - lruvec_add_folio_tail(lruvec, folio); - __count_vm_events(PGROTATED, folio_nr_pages(folio)); -} - -/* - * Writeback is about to end against a folio which has been marked for - * immediate reclaim. If it still appears to be reclaimable, move it - * to the tail of the inactive list. - * - * folio_rotate_reclaimable() must disable IRQs, to prevent nasty races. - */ -void folio_rotate_reclaimable(struct folio *folio) -{ - if (folio_test_locked(folio) || folio_test_dirty(folio) || - folio_test_unevictable(folio)) - return; - - folio_batch_add_and_move(folio, lru_move_tail, true); -} - void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_io, unsigned int nr_rotated) { @@ -578,17 +537,6 @@ void lru_add_drain_cpu(int cpu) if (folio_batch_count(fbatch)) folio_batch_move_lru(fbatch, lru_add); - fbatch = &fbatches->lru_move_tail; - /* Disabling interrupts below acts as a compiler barrier. 
*/ - if (data_race(folio_batch_count(fbatch))) { - unsigned long flags; - - /* No harm done if a racing interrupt already did this */ - local_lock_irqsave(&cpu_fbatches.lock_irq, flags); - folio_batch_move_lru(fbatch, lru_move_tail); - local_unlock_irqrestore(&cpu_fbatches.lock_irq, flags); - } - fbatch = &fbatches->lru_deactivate; if (folio_batch_count(fbatch)) folio_batch_move_lru(fbatch, lru_deactivate); @@ -682,7 +630,6 @@ static bool cpu_needs_drain(unsigned int cpu) /* Check these in order of likelihood that they're not zero */ return folio_batch_count(&fbatches->lru_add) || - folio_batch_count(&fbatches->lru_move_tail) || folio_batch_count(&fbatches->lru_deactivate) || folio_batch_count(&fbatches->lru_lazyfree) || folio_batch_count(&fbatches->lru_activate) || diff --git a/mm/vmscan.c b/mm/vmscan.c index db6e4552997c..4bead1ff5cd2 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3221,9 +3221,6 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_FLAGS); new_flags |= (new_gen + 1UL) << LRU_GEN_PGOFF; - /* for folio_end_writeback() */ - if (reclaiming) - new_flags |= BIT(PG_reclaim); } while (!try_cmpxchg(&folio->flags, &old_flags, new_flags)); lru_gen_update_size(lruvec, folio, old_gen, new_gen); @@ -4465,9 +4462,6 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca if (!folio_test_referenced(folio)) set_mask_bits(&folio->flags, LRU_REFS_MASK, 0); - /* for shrink_folio_list() */ - folio_clear_reclaim(folio); - success = lru_gen_del_folio(lruvec, folio, true); VM_WARN_ON_ONCE_FOLIO(!success, folio); @@ -4664,7 +4658,6 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap continue; } - /* retry folios that may have missed folio_rotate_reclaimable() */ if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) && !folio_test_dirty(folio) && !folio_test_writeback(folio)) { list_move(&folio->lru, &clean); diff --git 
a/tools/mm/page-types.c b/tools/mm/page-types.c index bcac7ebfb51f..c06647501370 100644 --- a/tools/mm/page-types.c +++ b/tools/mm/page-types.c @@ -85,7 +85,6 @@ * not part of kernel API */ #define KPF_ANON_EXCLUSIVE 47 -#define KPF_READAHEAD 48 #define KPF_SLUB_FROZEN 50 #define KPF_SLUB_DEBUG 51 #define KPF_FILE 61 @@ -108,7 +107,7 @@ static const char * const page_flag_names[] = { [KPF_ACTIVE] = "A:active", [KPF_SLAB] = "S:slab", [KPF_WRITEBACK] = "W:writeback", - [KPF_RECLAIM] = "I:reclaim", + [KPF_READAHEAD] = "I:readahead", [KPF_BUDDY] = "B:buddy", [KPF_MMAP] = "M:mmap", @@ -139,7 +138,6 @@ static const char * const page_flag_names[] = { [KPF_ARCH_2] = "H:arch_2", [KPF_ANON_EXCLUSIVE] = "d:anon_exclusive", - [KPF_READAHEAD] = "I:readahead", [KPF_SLUB_FROZEN] = "A:slub_frozen", [KPF_SLUB_DEBUG] = "E:slub_debug", @@ -484,10 +482,6 @@ static uint64_t expand_overloaded_flags(uint64_t flags, uint64_t pme) flags ^= BIT(ERROR) | BIT(SLUB_DEBUG); } - /* PG_reclaim is overloaded as PG_readahead in the read path */ - if ((flags & (BIT(RECLAIM) | BIT(WRITEBACK))) == BIT(RECLAIM)) - flags ^= BIT(RECLAIM) | BIT(READAHEAD); - if (pme & PM_SOFT_DIRTY) flags |= BIT(SOFTDIRTY); if (pme & PM_FILE) From patchwork Thu Jan 30 10:00:48 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13954420 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A66C11C07C8; Thu, 30 Jan 2025 10:01:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.12 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738231284; cv=none; b=uoQDJG8TEnjidfvIrXFV9cYM9ks+zp2sXgAqZhgHCkxwDBMpMJgR+EKYf8pBncDkdfbJKaxM2omi1kFCLrZYEw6VAUbYpTOLSuKMI8QPJWafQvNwYgVRrwDGF8o4Xs/QovJT8h0pPP1ZxNE6SUv3rX20Hmn1rZgJdlfY1sm7pz0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738231284; c=relaxed/simple; bh=szEhvue7tvwHIQZ0VmyVu64vw/Pobf8bTFzicSKRI7A=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=M3f4BuooTN/p2SMRLyGRtfPnA0i/V0R85iT7n3C+Rim52FebfcKaiWMIYogj1a/AaSBbH5P3y1JX79C2IRMrmkT8D2czpOJ+INbqhmPArzcDprBC46RDgtmDjmM5lySvOsCJ6BRdAELn5ukDybhGjLm2aB4wE82HGk1kU9OM6CA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=IVl1hh/E; arc=none smtp.client-ip=192.198.163.12 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="IVl1hh/E" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1738231283; x=1769767283; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=szEhvue7tvwHIQZ0VmyVu64vw/Pobf8bTFzicSKRI7A=; 
b=IVl1hh/Ev9sN7u8ck83TCl/sSRxCjgQahx2LcefUf3M5QL2G1RzrjDFD y+my8IbC3Vti0VrTHRWStxQgQM1cIFwzU7Rl4PqPaoVDk1vhRNDzAgOu7 zIoXnMgdrzhYJ6KYbr5OpjQfCm8oWjCGznoh0URkoSxKEwCuPcB8+h4mM wwjCPbr+VKyolCVpiSCsQVLsIPMeWs4lEMJf87gaC+JoJaJLvnvCryJGN muFyCI17Lkp0ghQBMUacowYN4yZIOcaav2/AaLvmWXF8XmxfY/ny6XQC+ lPpje1LQKB6GOY1mo+Nq3tNBW65bzGC+pnY3qQ2l1PGQ6mms10snZx/r3 w==; X-CSE-ConnectionGUID: iKSfvd5pSziaviLZDTYXnw== X-CSE-MsgGUID: aY9qBY4STwmnTA7WtFXeDQ== X-IronPort-AV: E=McAfee;i="6700,10204,11330"; a="42692534" X-IronPort-AV: E=Sophos;i="6.13,245,1732608000"; d="scan'208";a="42692534" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Jan 2025 02:01:20 -0800 X-CSE-ConnectionGUID: lcqnGOMgSX2cDcYWUs2yxQ== X-CSE-MsgGUID: GkOkS/VOT7+BISgs+ghD9w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="114263398" Received: from black.fi.intel.com ([10.237.72.28]) by orviesa003.jf.intel.com with ESMTP; 30 Jan 2025 02:01:12 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id 1CC531BD; Thu, 30 Jan 2025 12:01:02 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv3 10/11] mm/vmscan: Do not demote PG_dropbehind folios Date: Thu, 30 Jan 2025 12:00:48 +0200 Message-ID: <20250130100050.1868208-11-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com> References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 PG_dropbehind flag indicates that the folio need to be freed immediately. No point in demoting it. Signed-off-by: Kirill A. Shutemov --- mm/vmscan.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 4bead1ff5cd2..4f86e020759e 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1231,7 +1231,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * Before reclaiming the folio, try to relocate * its contents to another node. */ - if (do_demote_pass && + if (do_demote_pass && !folio_test_dropbehind(folio) && (thp_migration_supported() || !folio_test_large(folio))) { list_add(&folio->lru, &demote_folios); folio_unlock(folio); From patchwork Thu Jan 30 10:00:49 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13954422 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 23A7F1D63CC; Thu, 30 Jan 2025 10:01:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738231286; cv=none; b=VW4E3Jw+zzUZf4+naKZ1xqNQszd8M9GMGB24wXdhUDazmCLan6XWsY54OMF2boUMlNxoHSnt0pBVTxL2URFvFbraq+ap5SJw7m/9SxJ4VOQUObWYBghei95xu+zWeWNL6iEWK9tsPlUjmanXLi0Rz3IYwGijOZeIf7zuK3V0UyA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738231286; c=relaxed/simple; bh=c4T471B7fvYNCbP+qsYuuIhqfUQfIwOUu3Gqyl2QepM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Cbf46awe5uCBY9T2lvHd8YcmZYmfsh9prmhsmGZzS8rqJAzL97b1aGvkWk7UnJrwPa4rMx9Lc/FF2olWEGA11sEbFKDYGH7PYVNcRmwG0vO1bJHeAOb8Aetu555HgXminckw7MRdSc1EI2o8gIWshTRC45NNSNhf9XX2BjdGRN8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=g3z9B+1H; arc=none smtp.client-ip=198.175.65.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="g3z9B+1H" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1738231284; x=1769767284; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=c4T471B7fvYNCbP+qsYuuIhqfUQfIwOUu3Gqyl2QepM=; 
b=g3z9B+1H4ACzrPlqdy7CMS/kkyW9m/GiUDApm2YjgNVtwfV3aj+fRQKk esJ8yoXVm68X+QdzP6NuJtyN04P7vV1sFVWlKKqEosWvq6jMClzfGUEZT +aOXaEMfAl0wAHezWRqkV7eG1hzJ0l8XY2Adyzp65xNMn6s1u0nnbFpp5 64EXvx/7OtYDiY8+X5ubPfK0Gz7SuuGLgNkrwIkIx+slM43kdRjH1t9md 8Yl1UqoZWHeLCK9AHdORFPq95DGAW9AW5Rshyz52+3pIswfysj09bq0P4 CHp1bKlNh4oTpTxGuZbvWFwij6v/ygWbqKogQFtt6TP0TFjNPHtaTtYt+ A==; X-CSE-ConnectionGUID: Ou/VxO0hQUK39Eg9mpMRbA== X-CSE-MsgGUID: SXdmffQjSaqX0FNJoEk/uw== X-IronPort-AV: E=McAfee;i="6700,10204,11330"; a="49752509" X-IronPort-AV: E=Sophos;i="6.13,245,1732608000"; d="scan'208";a="49752509" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by orvoesa105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Jan 2025 02:01:20 -0800 X-CSE-ConnectionGUID: gl9jmnbSTFay3H9oYqDCmQ== X-CSE-MsgGUID: QULj6oEnRY+5uh9IDQThJg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,245,1732608000"; d="scan'208";a="109187930" Received: from black.fi.intel.com ([10.237.72.28]) by fmviesa007.fm.intel.com with ESMTP; 30 Jan 2025 02:01:11 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id 267861C6; Thu, 30 Jan 2025 12:01:02 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv3 11/11] mm: Rename PG_dropbehind to PG_reclaim Date: Thu, 30 Jan 2025 12:00:49 +0200 Message-ID: <20250130100050.1868208-12-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com> References: <20250130100050.1868208-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Now as PG_reclaim is gone, its name can be reclaimed for better use :) Rename PG_dropbehind to PG_reclaim and rename all helpers around it. Signed-off-by: Kirill A. 
Shutemov Acked-by: Yu Zhao --- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 2 +- include/linux/mm_inline.h | 2 +- include/linux/page-flags.h | 8 +++--- include/linux/pagemap.h | 2 +- include/trace/events/mmflags.h | 2 +- mm/filemap.c | 34 +++++++++++------------ mm/migrate.c | 4 +-- mm/readahead.c | 4 +-- mm/swap.c | 2 +- mm/truncate.c | 2 +- mm/vmscan.c | 22 +++++++-------- mm/zswap.c | 2 +- 12 files changed, 43 insertions(+), 43 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index c1724847c001..e543e6bfb093 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -329,7 +329,7 @@ void __shmem_writeback(size_t size, struct address_space *mapping) if (!folio_mapped(folio) && folio_clear_dirty_for_io(folio)) { int ret; - folio_set_dropbehind(folio); + folio_set_reclaim(folio); ret = mapping->a_ops->writepage(&folio->page, &wbc); if (!ret) goto put; diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index e5049a975579..9077ba15bc36 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -241,7 +241,7 @@ static inline unsigned long lru_gen_folio_seq(struct lruvec *lruvec, struct foli else if (reclaiming) gen = MAX_NR_GENS; else if ((!folio_is_file_lru(folio) && !folio_test_swapcache(folio)) || - folio_test_dropbehind(folio)) + folio_test_reclaim(folio)) gen = MIN_NR_GENS; else gen = MAX_NR_GENS - folio_test_workingset(folio); diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 8cbfb82e7b4f..f727e2acd467 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -110,7 +110,7 @@ enum pageflags { PG_readahead, PG_swapbacked, /* Page is backed by RAM/swap */ PG_unevictable, /* Page is "unevictable" */ - PG_dropbehind, /* drop pages on IO completion */ + PG_reclaim, /* drop pages on IO completion */ #ifdef CONFIG_MMU PG_mlocked, /* Page is vma mlocked */ #endif @@ -595,9 +595,9 @@ 
FOLIO_FLAG(mappedtodisk, FOLIO_HEAD_PAGE) FOLIO_FLAG(readahead, FOLIO_HEAD_PAGE) FOLIO_TEST_CLEAR_FLAG(readahead, FOLIO_HEAD_PAGE) -FOLIO_FLAG(dropbehind, FOLIO_HEAD_PAGE) - FOLIO_TEST_CLEAR_FLAG(dropbehind, FOLIO_HEAD_PAGE) - __FOLIO_SET_FLAG(dropbehind, FOLIO_HEAD_PAGE) +FOLIO_FLAG(reclaim, FOLIO_HEAD_PAGE) + FOLIO_TEST_CLEAR_FLAG(reclaim, FOLIO_HEAD_PAGE) + __FOLIO_SET_FLAG(reclaim, FOLIO_HEAD_PAGE) #ifdef CONFIG_HIGHMEM /* diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 47bfc6b1b632..091c72a07ef4 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -1360,7 +1360,7 @@ struct readahead_control { pgoff_t _index; unsigned int _nr_pages; unsigned int _batch_count; - bool dropbehind; + bool reclaim; bool _workingset; unsigned long _pflags; }; diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index ff5d2e5da569..8597dc4125e3 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -180,7 +180,7 @@ TRACE_DEFINE_ENUM(___GFP_LAST_BIT); DEF_PAGEFLAG_NAME(readahead), \ DEF_PAGEFLAG_NAME(swapbacked), \ DEF_PAGEFLAG_NAME(unevictable), \ - DEF_PAGEFLAG_NAME(dropbehind) \ + DEF_PAGEFLAG_NAME(reclaim) \ IF_HAVE_PG_MLOCK(mlocked) \ IF_HAVE_PG_HWPOISON(hwpoison) \ IF_HAVE_PG_IDLE(idle) \ diff --git a/mm/filemap.c b/mm/filemap.c index ffbf3bda2a38..4fe551037bf7 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1591,11 +1591,11 @@ int folio_wait_private_2_killable(struct folio *folio) EXPORT_SYMBOL(folio_wait_private_2_killable); /* - * If folio was marked as dropbehind, then pages should be dropped when writeback + * If folio was marked as reclaim, then pages should be dropped when writeback * completes. Do that now. If we fail, it's likely because of a big folio - - * just reset dropbehind for that case and latter completions should invalidate. + * just reset reclaim for that case and latter completions should invalidate. 
*/ -static void folio_end_dropbehind_write(struct folio *folio) +static void folio_end_reclaim_write(struct folio *folio) { /* * Hitting !in_task() should not happen off RWF_DONTCACHE writeback, @@ -1621,7 +1621,7 @@ static void folio_end_dropbehind_write(struct folio *folio) */ void folio_end_writeback(struct folio *folio) { - bool folio_dropbehind = false; + bool folio_reclaim = false; VM_BUG_ON_FOLIO(!folio_test_writeback(folio), folio); @@ -1633,13 +1633,13 @@ void folio_end_writeback(struct folio *folio) */ folio_get(folio); if (!folio_test_dirty(folio)) - folio_dropbehind = folio_test_clear_dropbehind(folio); + folio_reclaim = folio_test_clear_reclaim(folio); if (__folio_end_writeback(folio)) folio_wake_bit(folio, PG_writeback); acct_reclaim_writeback(folio); - if (folio_dropbehind) - folio_end_dropbehind_write(folio); + if (folio_reclaim) + folio_end_reclaim_write(folio); folio_put(folio); } EXPORT_SYMBOL(folio_end_writeback); @@ -1963,7 +1963,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index, if (fgp_flags & FGP_ACCESSED) __folio_set_referenced(folio); if (fgp_flags & FGP_DONTCACHE) - __folio_set_dropbehind(folio); + __folio_set_reclaim(folio); err = filemap_add_folio(mapping, folio, index, gfp); if (!err) @@ -1987,8 +1987,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index, if (!folio) return ERR_PTR(-ENOENT); /* not an uncached lookup, clear uncached if set */ - if (folio_test_dropbehind(folio) && !(fgp_flags & FGP_DONTCACHE)) - folio_clear_dropbehind(folio); + if (folio_test_reclaim(folio) && !(fgp_flags & FGP_DONTCACHE)) + folio_clear_reclaim(folio); return folio; } EXPORT_SYMBOL(__filemap_get_folio); @@ -2486,7 +2486,7 @@ static int filemap_create_folio(struct kiocb *iocb, struct folio_batch *fbatch) if (!folio) return -ENOMEM; if (iocb->ki_flags & IOCB_DONTCACHE) - __folio_set_dropbehind(folio); + __folio_set_reclaim(folio); /* * Protect against truncate / hole punch. 
Grabbing invalidate_lock @@ -2533,7 +2533,7 @@ static int filemap_readahead(struct kiocb *iocb, struct file *file, if (iocb->ki_flags & IOCB_NOIO) return -EAGAIN; if (iocb->ki_flags & IOCB_DONTCACHE) - ractl.dropbehind = 1; + ractl.reclaim = 1; page_cache_async_ra(&ractl, folio, last_index - folio->index); return 0; } @@ -2564,7 +2564,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count, if (iocb->ki_flags & IOCB_NOWAIT) flags = memalloc_noio_save(); if (iocb->ki_flags & IOCB_DONTCACHE) - ractl.dropbehind = 1; + ractl.reclaim = 1; page_cache_sync_ra(&ractl, last_index - index); if (iocb->ki_flags & IOCB_NOWAIT) memalloc_noio_restore(flags); @@ -2612,15 +2612,15 @@ static inline bool pos_same_folio(loff_t pos1, loff_t pos2, struct folio *folio) return (pos1 >> shift == pos2 >> shift); } -static void filemap_end_dropbehind_read(struct address_space *mapping, +static void filemap_end_reclaim_read(struct address_space *mapping, struct folio *folio) { - if (!folio_test_dropbehind(folio)) + if (!folio_test_reclaim(folio)) return; if (folio_test_writeback(folio) || folio_test_dirty(folio)) return; if (folio_trylock(folio)) { - if (folio_test_clear_dropbehind(folio)) + if (folio_test_clear_reclaim(folio)) folio_unmap_invalidate(mapping, folio, 0); folio_unlock(folio); } @@ -2742,7 +2742,7 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter, for (i = 0; i < folio_batch_count(&fbatch); i++) { struct folio *folio = fbatch.folios[i]; - filemap_end_dropbehind_read(mapping, folio); + filemap_end_reclaim_read(mapping, folio); folio_put(folio); } folio_batch_init(&fbatch); diff --git a/mm/migrate.c b/mm/migrate.c index 19f913090aed..a4f6b9b5f745 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -683,8 +683,8 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio) folio_set_dirty(newfolio); /* TODO: free the folio on migration? 
*/ - if (folio_test_dropbehind(folio)) - folio_set_dropbehind(newfolio); + if (folio_test_reclaim(folio)) + folio_set_reclaim(newfolio); if (folio_test_young(folio)) folio_set_young(newfolio); diff --git a/mm/readahead.c b/mm/readahead.c index 220155a5c964..17b0f463a11a 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -185,8 +185,8 @@ static struct folio *ractl_alloc_folio(struct readahead_control *ractl, struct folio *folio; folio = filemap_alloc_folio(gfp_mask, order); - if (folio && ractl->dropbehind) - __folio_set_dropbehind(folio); + if (folio && ractl->reclaim) + __folio_set_reclaim(folio); return folio; } diff --git a/mm/swap.c b/mm/swap.c index 96892a0d2491..6250e21e1a73 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -406,7 +406,7 @@ static bool lru_gen_clear_refs(struct folio *folio) */ void folio_mark_accessed(struct folio *folio) { - if (folio_test_dropbehind(folio)) + if (folio_test_reclaim(folio)) return; if (lru_gen_enabled()) { lru_gen_inc_refs(folio); diff --git a/mm/truncate.c b/mm/truncate.c index 8efa4e325e54..e922ceb66c44 100644 --- a/mm/truncate.c +++ b/mm/truncate.c @@ -488,7 +488,7 @@ unsigned long mapping_try_invalidate(struct address_space *mapping, if (!ret) { if (!folio_test_unevictable(folio) && !folio_mapped(folio)) - folio_set_dropbehind(folio); + folio_set_reclaim(folio); /* Likely in the lru cache of a remote CPU */ if (nr_failed) diff --git a/mm/vmscan.c b/mm/vmscan.c index 4f86e020759e..b5d98f0c7d5b 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -692,13 +692,13 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping, if (shmem_mapping(mapping) && folio_test_large(folio)) wbc.list = folio_list; - folio_set_dropbehind(folio); + folio_set_reclaim(folio); res = mapping->a_ops->writepage(&folio->page, &wbc); if (res < 0) handle_write_error(mapping, folio, res); if (res == AOP_WRITEPAGE_ACTIVATE) { - folio_clear_dropbehind(folio); + folio_clear_reclaim(folio); return PAGE_ACTIVATE; } @@ -1140,7 +1140,7 @@ static 
unsigned int shrink_folio_list(struct list_head *folio_list, * for immediate reclaim are making it to the end of * the LRU a second time. */ - if (writeback && folio_test_dropbehind(folio)) + if (writeback && folio_test_reclaim(folio)) stat->nr_congested += nr_pages; /* @@ -1149,7 +1149,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * * 1) If reclaim is encountering an excessive number * of folios under writeback and this folio has both - * the writeback and dropbehind flags set, then it + * the writeback and reclaim flags set, then it * indicates that folios are being queued for I/O but * are being recycled through the LRU before the I/O * can complete. Waiting on the folio itself risks an @@ -1173,7 +1173,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * would probably show more reasons. * * 3) Legacy memcg encounters a folio that already has the - * dropbehind flag set. memcg does not have any dirty folio + * reclaim flag set. memcg does not have any dirty folio * throttling so we could easily OOM just because too many * folios are in writeback and there is nothing else to * reclaim. Wait for the writeback to complete. @@ -1190,16 +1190,16 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, if (folio_test_writeback(folio)) { /* Case 1 above */ if (current_is_kswapd() && - folio_test_dropbehind(folio) && + folio_test_reclaim(folio) && test_bit(PGDAT_WRITEBACK, &pgdat->flags)) { stat->nr_immediate += nr_pages; goto activate_locked; /* Case 2 above */ } else if (writeback_throttling_sane(sc) || - !folio_test_dropbehind(folio) || + !folio_test_reclaim(folio) || !may_enter_fs(folio, sc->gfp_mask)) { - folio_set_dropbehind(folio); + folio_set_reclaim(folio); stat->nr_writeback += nr_pages; goto activate_locked; @@ -1231,7 +1231,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * Before reclaiming the folio, try to relocate * its contents to another node. 
*/ - if (do_demote_pass && !folio_test_dropbehind(folio) && + if (do_demote_pass && !folio_test_reclaim(folio) && (thp_migration_supported() || !folio_test_large(folio))) { list_add(&folio->lru, &demote_folios); folio_unlock(folio); @@ -1354,7 +1354,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, */ if (folio_is_file_lru(folio) && (!current_is_kswapd() || - !folio_test_dropbehind(folio) || + !folio_test_reclaim(folio) || !test_bit(PGDAT_DIRTY, &pgdat->flags))) { /* * Immediately reclaim when written back. @@ -1364,7 +1364,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, */ node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE, nr_pages); - folio_set_dropbehind(folio); + folio_set_reclaim(folio); goto activate_locked; } diff --git a/mm/zswap.c b/mm/zswap.c index 611adf3d46a5..0825f5551567 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -1103,7 +1103,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry, folio_mark_uptodate(folio); /* free the folio after writeback */ - folio_set_dropbehind(folio); + folio_set_reclaim(folio); /* start writeback */ __swap_writepage(folio, &wbc);