From patchwork Fri Aug 25 13:59:04 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13365797
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [RFC PATCH 00/14] Rearrange batched folio freeing
Date: Fri, 25 Aug 2023 14:59:04 +0100
Message-Id: <20230825135918.4164671-1-willy@infradead.org>
Other than the obvious "remove calls to compound_head" changes, the
fundamental belief here is that iterating a linked list is much slower
than iterating an array (5-15x slower in my testing).  There's also an
associated belief that since we iterate the batch of folios three times,
we do better when the array is small (ie 15 entries) than we do with a
batch that is hundreds of entries long, which only gives us the
opportunity for the first pages to fall out of cache by the time we get
to the end.
The one place where that probably falls down is "Free folios in a batch
in shrink_folio_list()" where we'll flush the TLB once per batch instead
of at the end.  That's going to take some benchmarking.

Matthew Wilcox (Oracle) (14):
  mm: Make folios_put() the basis of release_pages()
  mm: Convert free_unref_page_list() to use folios
  mm: Add free_unref_folios()
  mm: Use folios_put() in __folio_batch_release()
  memcg: Add mem_cgroup_uncharge_folios()
  mm: Remove use of folio list from folios_put()
  mm: Use free_unref_folios() in put_pages_list()
  mm: use __page_cache_release() in folios_put()
  mm: Handle large folios in free_unref_folios()
  mm: Allow non-hugetlb large folios to be batch processed
  mm: Free folios in a batch in shrink_folio_list()
  mm: Free folios directly in move_folios_to_lru()
  memcg: Remove mem_cgroup_uncharge_list()
  mm: Remove free_unref_page_list()

 include/linux/memcontrol.h |  24 ++---
 include/linux/mm.h         |  19 +---
 mm/internal.h              |   4 +-
 mm/memcontrol.c            |  16 ++--
 mm/mlock.c                 |   3 +-
 mm/page_alloc.c            |  74 ++++++++-------
 mm/swap.c                  | 180 ++++++++++++++++++++-----------------
 mm/vmscan.c                |  51 +++++------
 8 files changed, 181 insertions(+), 190 deletions(-)