From patchwork Mon Dec 26 07:08:44 2022
X-Patchwork-Submitter: Lorenzo Stoakes <lstoakes@gmail.com>
X-Patchwork-Id: 13081666
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
 William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport,
 Joel Fernandes, Lorenzo Stoakes
Subject: [PATCH v2 1/4] mm: pagevec: add folio_batch_reinit()
Date: Mon, 26 Dec 2022 07:08:44 +0000

This performs the same task as pagevec_reinit(), only modifying a folio
batch rather than a pagevec.
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 include/linux/pagevec.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 215eb6c3bdc9..2a6f61a0c10a 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -103,6 +103,11 @@ static inline void folio_batch_init(struct folio_batch *fbatch)
 	fbatch->percpu_pvec_drained = false;
 }
 
+static inline void folio_batch_reinit(struct folio_batch *fbatch)
+{
+	fbatch->nr = 0;
+}
+
 static inline unsigned int folio_batch_count(struct folio_batch *fbatch)
 {
 	return fbatch->nr;

From patchwork Mon Dec 26 07:08:45 2022
X-Patchwork-Submitter: Lorenzo Stoakes <lstoakes@gmail.com>
X-Patchwork-Id: 13081667
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
 William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport,
 Joel Fernandes, Lorenzo Stoakes
Subject: [PATCH v2 2/4] mm: mlock: use folios and a folio batch internally
Date: Mon, 26 Dec 2022 07:08:45 +0000
Message-Id: <03ac78b416be5a361b79464acc3da7f93b9c37e8.1672038314.git.lstoakes@gmail.com>

This brings mlock in line with the folio batches declared in mm/swap.c
and makes the code more consistent across the two.

The existing mechanism for identifying which operation each folio in the
batch is undergoing is maintained, i.e. using the lower 2 bits of the
struct folio address (previously struct page address).
This should continue to function correctly as folios remain at least
system word-aligned. All invocations of mlock() pass either a
non-compound page or the head of a THP-compound page and no tail pages
need updating, so this functionality works with struct folios being used
internally rather than struct pages.

In this patch the external interface is kept identical to before in
order to maintain separation between patches in the series, using a
rather awkward conversion from struct page to struct folio in relevant
functions. However, this maintenance of the existing interface is
intended to be temporary - the next patch in the series will update the
interfaces to accept folios directly.

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 mm/mlock.c | 238 +++++++++++++++++++++++++++--------------------------
 1 file changed, 120 insertions(+), 118 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 7032f6dd0ce1..e9ba47fe67ed 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -28,12 +28,12 @@
 
 #include "internal.h"
 
-struct mlock_pvec {
+struct mlock_fbatch {
 	local_lock_t lock;
-	struct pagevec vec;
+	struct folio_batch fbatch;
 };
 
-static DEFINE_PER_CPU(struct mlock_pvec, mlock_pvec) = {
+static DEFINE_PER_CPU(struct mlock_fbatch, mlock_fbatch) = {
 	.lock = INIT_LOCAL_LOCK(lock),
 };
 
@@ -48,192 +48,192 @@ bool can_do_mlock(void)
 EXPORT_SYMBOL(can_do_mlock);
 
 /*
- * Mlocked pages are marked with PageMlocked() flag for efficient testing
+ * Mlocked folios are marked with the PG_mlocked flag for efficient testing
  * in vmscan and, possibly, the fault path; and to support semi-accurate
  * statistics.
  *
- * An mlocked page [PageMlocked(page)] is unevictable.  As such, it will
- * be placed on the LRU "unevictable" list, rather than the [in]active lists.
- * The unevictable list is an LRU sibling list to the [in]active lists.
- * PageUnevictable is set to indicate the unevictable state.
+ * An mlocked folio [folio_test_mlocked(folio)] is unevictable.  As such, it
+ * will be ostensibly placed on the LRU "unevictable" list (actually no such
+ * list exists), rather than the [in]active lists.  PG_unevictable is set to
+ * indicate the unevictable state.
  */
-static struct lruvec *__mlock_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__mlock_folio(struct folio *folio, struct lruvec *lruvec)
 {
 	/* There is nothing more we can do while it's off LRU */
-	if (!TestClearPageLRU(page))
+	if (!folio_test_clear_lru(folio))
 		return lruvec;
 
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
-	if (unlikely(page_evictable(page))) {
+	if (unlikely(folio_evictable(folio))) {
 		/*
-		 * This is a little surprising, but quite possible:
-		 * PageMlocked must have got cleared already by another CPU.
-		 * Could this page be on the Unevictable LRU?  I'm not sure,
-		 * but move it now if so.
+		 * This is a little surprising, but quite possible: PG_mlocked
+		 * must have got cleared already by another CPU.  Could this
+		 * folio be unevictable?  I'm not sure, but move it now if so.
 		 */
-		if (PageUnevictable(page)) {
-			del_page_from_lru_list(page, lruvec);
-			ClearPageUnevictable(page);
-			add_page_to_lru_list(page, lruvec);
+		if (folio_test_unevictable(folio)) {
+			lruvec_del_folio(lruvec, folio);
+			folio_clear_unevictable(folio);
+			lruvec_add_folio(lruvec, folio);
+
 			__count_vm_events(UNEVICTABLE_PGRESCUED,
-					  thp_nr_pages(page));
+					  folio_nr_pages(folio));
 		}
 		goto out;
 	}
 
-	if (PageUnevictable(page)) {
-		if (PageMlocked(page))
-			page->mlock_count++;
+	if (folio_test_unevictable(folio)) {
+		if (folio_test_mlocked(folio))
+			folio->mlock_count++;
 		goto out;
 	}
 
-	del_page_from_lru_list(page, lruvec);
-	ClearPageActive(page);
-	SetPageUnevictable(page);
-	page->mlock_count = !!PageMlocked(page);
-	add_page_to_lru_list(page, lruvec);
-	__count_vm_events(UNEVICTABLE_PGCULLED, thp_nr_pages(page));
+	lruvec_del_folio(lruvec, folio);
+	folio_clear_active(folio);
+	folio_set_unevictable(folio);
+	folio->mlock_count = !!folio_test_mlocked(folio);
+	lruvec_add_folio(lruvec, folio);
+	__count_vm_events(UNEVICTABLE_PGCULLED, folio_nr_pages(folio));
 out:
-	SetPageLRU(page);
+	folio_set_lru(folio);
 	return lruvec;
 }
 
-static struct lruvec *__mlock_new_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__mlock_new_folio(struct folio *folio, struct lruvec *lruvec)
 {
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
 	/* As above, this is a little surprising, but possible */
-	if (unlikely(page_evictable(page)))
+	if (unlikely(folio_evictable(folio)))
 		goto out;
 
-	SetPageUnevictable(page);
-	page->mlock_count = !!PageMlocked(page);
-	__count_vm_events(UNEVICTABLE_PGCULLED, thp_nr_pages(page));
+	folio_set_unevictable(folio);
+	folio->mlock_count = !!folio_test_mlocked(folio);
+	__count_vm_events(UNEVICTABLE_PGCULLED, folio_nr_pages(folio));
 out:
-	add_page_to_lru_list(page, lruvec);
-	SetPageLRU(page);
+	lruvec_add_folio(lruvec, folio);
+	folio_set_lru(folio);
 	return lruvec;
 }
 
-static struct lruvec *__munlock_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__munlock_folio(struct folio *folio, struct lruvec *lruvec)
 {
-	int nr_pages = thp_nr_pages(page);
+	int nr_pages = folio_nr_pages(folio);
 	bool isolated = false;
 
-	if (!TestClearPageLRU(page))
+	if (!folio_test_clear_lru(folio))
 		goto munlock;
 
 	isolated = true;
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
-	if (PageUnevictable(page)) {
+	if (folio_test_unevictable(folio)) {
 		/* Then mlock_count is maintained, but might undercount */
-		if (page->mlock_count)
-			page->mlock_count--;
-		if (page->mlock_count)
+		if (folio->mlock_count)
+			folio->mlock_count--;
+		if (folio->mlock_count)
 			goto out;
 	}
 	/* else assume that was the last mlock: reclaim will fix it if not */
 
 munlock:
-	if (TestClearPageMlocked(page)) {
-		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
-		if (isolated || !PageUnevictable(page))
+	if (folio_test_clear_mlocked(folio)) {
+		zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
+		if (isolated || !folio_test_unevictable(folio))
 			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
 		else
 			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
 	}
 
-	/* page_evictable() has to be checked *after* clearing Mlocked */
-	if (isolated && PageUnevictable(page) && page_evictable(page)) {
-		del_page_from_lru_list(page, lruvec);
-		ClearPageUnevictable(page);
-		add_page_to_lru_list(page, lruvec);
+	/* folio_evictable() has to be checked *after* clearing Mlocked */
+	if (isolated && folio_test_unevictable(folio) && folio_evictable(folio)) {
+		lruvec_del_folio(lruvec, folio);
+		folio_clear_unevictable(folio);
+		lruvec_add_folio(lruvec, folio);
 		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	}
 out:
 	if (isolated)
-		SetPageLRU(page);
+		folio_set_lru(folio);
 	return lruvec;
 }
 
 /*
- * Flags held in the low bits of a struct page pointer on the mlock_pvec.
+ * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
  */
 #define LRU_PAGE 0x1
 #define NEW_PAGE 0x2
-static inline struct page *mlock_lru(struct page *page)
+static inline struct folio *mlock_lru(struct folio *folio)
 {
-	return (struct page *)((unsigned long)page + LRU_PAGE);
+	return (struct folio *)((unsigned long)folio + LRU_PAGE);
 }
 
-static inline struct page *mlock_new(struct page *page)
+static inline struct folio *mlock_new(struct folio *folio)
 {
-	return (struct page *)((unsigned long)page + NEW_PAGE);
+	return (struct folio *)((unsigned long)folio + NEW_PAGE);
 }
 
 /*
- * mlock_pagevec() is derived from pagevec_lru_move_fn():
- * perhaps that can make use of such page pointer flags in future,
- * but for now just keep it for mlock.  We could use three separate
- * pagevecs instead, but one feels better (munlocking a full pagevec
- * does not need to drain mlocking pagevecs first).
+ * mlock_folio_batch() is derived from folio_batch_move_lru(): perhaps that can
+ * make use of such page pointer flags in future, but for now just keep it for
+ * mlock.  We could use three separate folio batches instead, but one feels
+ * better (munlocking a full folio batch does not need to drain mlocking folio
+ * batches first).
  */
-static void mlock_pagevec(struct pagevec *pvec)
+static void mlock_folio_batch(struct folio_batch *fbatch)
 {
 	struct lruvec *lruvec = NULL;
 	unsigned long mlock;
-	struct page *page;
+	struct folio *folio;
 	int i;
 
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		page = pvec->pages[i];
-		mlock = (unsigned long)page & (LRU_PAGE | NEW_PAGE);
-		page = (struct page *)((unsigned long)page - mlock);
-		pvec->pages[i] = page;
+	for (i = 0; i < folio_batch_count(fbatch); i++) {
+		folio = fbatch->folios[i];
+		mlock = (unsigned long)folio & (LRU_PAGE | NEW_PAGE);
+		folio = (struct folio *)((unsigned long)folio - mlock);
+		fbatch->folios[i] = folio;
 
 		if (mlock & LRU_PAGE)
-			lruvec = __mlock_page(page, lruvec);
+			lruvec = __mlock_folio(folio, lruvec);
 		else if (mlock & NEW_PAGE)
-			lruvec = __mlock_new_page(page, lruvec);
+			lruvec = __mlock_new_folio(folio, lruvec);
 		else
-			lruvec = __munlock_page(page, lruvec);
+			lruvec = __munlock_folio(folio, lruvec);
 	}
 
 	if (lruvec)
 		unlock_page_lruvec_irq(lruvec);
-	release_pages(pvec->pages, pvec->nr);
-	pagevec_reinit(pvec);
+	release_pages(fbatch->folios, fbatch->nr);
+	folio_batch_reinit(fbatch);
 }
 
 void mlock_page_drain_local(void)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
-	if (pagevec_count(pvec))
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
+	if (folio_batch_count(fbatch))
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 void mlock_page_drain_remote(int cpu)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
 	WARN_ON_ONCE(cpu_online(cpu));
-	pvec = &per_cpu(mlock_pvec.vec, cpu);
-	if (pagevec_count(pvec))
-		mlock_pagevec(pvec);
+	fbatch = &per_cpu(mlock_fbatch.fbatch, cpu);
+	if (folio_batch_count(fbatch))
+		mlock_folio_batch(fbatch);
 }
 
 bool need_mlock_page_drain(int cpu)
 {
-	return pagevec_count(&per_cpu(mlock_pvec.vec, cpu));
+	return folio_batch_count(&per_cpu(mlock_fbatch.fbatch, cpu));
 }
 
 /**
@@ -242,10 +242,10 @@ bool need_mlock_page_drain(int cpu)
  */
 void mlock_folio(struct folio *folio)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
 	if (!folio_test_set_mlocked(folio)) {
 		int nr_pages = folio_nr_pages(folio);
 
@@ -255,10 +255,10 @@ void mlock_folio(struct folio *folio)
 	}
 
 	folio_get(folio);
-	if (!pagevec_add(pvec, mlock_lru(&folio->page)) ||
+	if (!folio_batch_add(fbatch, mlock_lru(folio)) ||
 	    folio_test_large(folio) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 /**
@@ -267,20 +267,22 @@ void mlock_folio(struct folio *folio)
  */
 void mlock_new_page(struct page *page)
 {
-	struct pagevec *pvec;
-	int nr_pages = thp_nr_pages(page);
+	struct folio_batch *fbatch;
+	struct folio *folio = page_folio(page);
+	int nr_pages = folio_nr_pages(folio);
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
-	SetPageMlocked(page);
-	mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
+	folio_set_mlocked(folio);
+
+	zone_stat_mod_folio(folio, NR_MLOCK, nr_pages);
 	__count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
 
-	get_page(page);
-	if (!pagevec_add(pvec, mlock_new(page)) ||
-	    PageHead(page) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	folio_get(folio);
+	if (!folio_batch_add(fbatch, mlock_new(folio)) ||
+	    folio_test_large(folio) || lru_cache_disabled())
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 /**
@@ -289,20 +291,20 @@ void mlock_new_page(struct page *page)
  */
 void munlock_page(struct page *page)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
+	struct folio *folio = page_folio(page);
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
 	/*
-	 * TestClearPageMlocked(page) must be left to __munlock_page(),
-	 * which will check whether the page is multiply mlocked.
+	 * folio_test_clear_mlocked(folio) must be left to __munlock_folio(),
+	 * which will check whether the folio is multiply mlocked.
 	 */
-
-	get_page(page);
-	if (!pagevec_add(pvec, page) ||
-	    PageHead(page) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	folio_get(folio);
+	if (!folio_batch_add(fbatch, folio) ||
+	    folio_test_large(folio) || lru_cache_disabled())
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
:message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=PvP9A6XKAl/v1o7+nXXmkZgCax5Ofzh4VaB88gRKyTM=; b=ky8mj0aMYqAH26amqbof509vhFcQfx/R9JD++pBJfS9LkQ5h7Ku50RfP+Qwr6uqUNV N6jemG0/tTfyufN/1FVlI2FBfR/4NQES9dCn1oBqKrt4hJDGUijI5JwJAsfPa6PZiHze 8taysu/+hfy9TDEcqWUKrxtRdn1IaAA5UBZjBgEErAA4K9QvRPdGwQ/THSI5w+g5Q28o H86M3tygigaBd1IuDGha4O5hp6AbLl0034NdnOvrY8o9ciDPToSU1t3kPp5zPgi9Nu18 7boxfKn+etohB+OD9Kz/HBTbZOmQeX/6V+/Bhvo7KUJ4VUWuzpRUDGrJk2mPHjZ61p9n +Ynw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=PvP9A6XKAl/v1o7+nXXmkZgCax5Ofzh4VaB88gRKyTM=; b=ug+Fyby0YjQCropFgkwshVrZC/fzS4Z9KXtlPo5YayRgaLHoz4ugG4kkHY3GV7aD0f JIXTo1MEVp7fT2rvhf03wA+B3Dy5zDqY1xasExfeTXQQCWgTirUwLGyFlW/AaQ02gNh5 CYembjSB5llbtb+RWFUuF5DW4PUyg7vUIRZ2l8DV28vatpn7VS9IOIlZ7ZF7ILjef96e vHXwEa3qGs88STdnyjdTNp9A+R7kkRJMm5LhPgAvQgUjgv6wnW1Hf7Hr0EXomAOXpHJT Gvy0twzBawwCoc7EntamFVVkhkSHVza1hQL9l0rgKDgawZby/1rIRkIS8r2LAe8V+pJB VwGg== X-Gm-Message-State: AFqh2kpic9R1pK1r+xL2jVspr7m7VkMACv13lmhXku/fXoQIXioN6HeT cz+LLhNPpKjVW4mJVwVopOBdPKQ7SX0= X-Google-Smtp-Source: AMrXdXvP/oOJeqG/K19E0NDhr+3RSqXJeDQAjARpuQZuKzrvSV8HjWUvLNnrxYVZJF148QZamrvUcg== X-Received: by 2002:a05:600c:4f55:b0:3d3:5166:2da4 with SMTP id m21-20020a05600c4f5500b003d351662da4mr12246298wmq.8.1672038538554; Sun, 25 Dec 2022 23:08:58 -0800 (PST) Received: from lucifer.home ([2a00:23c5:dc8c:8701:1663:9a35:5a7b:1d76]) by smtp.googlemail.com with ESMTPSA id e16-20020a05600c4e5000b003c21ba7d7d6sm13191456wmq.44.2022.12.25.23.08.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Dec 2022 23:08:57 -0800 (PST) From: Lorenzo Stoakes To: linux-mm@kvack.org, Andrew Morton , linux-kernel@vger.kernel.org Cc: Matthew Wilcox , Hugh Dickins , Vlastimil Babka , Liam Howlett , William Kucharski , Christian Brauner 
, Jonathan Corbet, Mike Rapoport, Joel Fernandes, Lorenzo Stoakes
Subject: [PATCH v2 3/4] mm: mlock: update the interface to use folios
Date: Mon, 26 Dec 2022 07:08:46 +0000
Message-Id: <555c36b91c4b34a5972f2614395e3c3831e8102f.1672038314.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0

This patch updates the mlock interface to accept folios rather than
pages, bringing the interface in line with the internal implementation.
munlock_vma_page() still requires a page_folio() conversion; however, this
is consistent with the existing mlock_vma_page() implementation and is a
product of rmap still dealing in pages rather than folios.

Signed-off-by: Lorenzo Stoakes
---
 mm/internal.h | 26 ++++++++++++++++----------
 mm/mlock.c    | 32 +++++++++++++++-----------------
 mm/swap.c     |  2 +-
 3 files changed, 32 insertions(+), 28 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 1d6f4e168510..8a6e83315369 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -515,10 +515,9 @@ extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
  * should be called with vma's mmap_lock held for read or write,
  * under page table lock for the pte/pmd being added or removed.
  *
- * mlock is usually called at the end of page_add_*_rmap(),
- * munlock at the end of page_remove_rmap(); but new anon
- * pages are managed by lru_cache_add_inactive_or_unevictable()
- * calling mlock_new_page().
+ * mlock is usually called at the end of page_add_*_rmap(), munlock at
+ * the end of page_remove_rmap(); but new anon folios are managed by
+ * folio_add_lru_vma() calling mlock_new_folio().
  *
  * @compound is used to include pmd mappings of THPs, but filter out
  * pte mappings of THPs, which cannot be consistently counted: a pte
@@ -547,15 +546,22 @@ static inline void mlock_vma_page(struct page *page,
 	mlock_vma_folio(page_folio(page), vma, compound);
 }
 
-void munlock_page(struct page *page);
-static inline void munlock_vma_page(struct page *page,
+void munlock_folio(struct folio *folio);
+
+static inline void munlock_vma_folio(struct folio *folio,
 		struct vm_area_struct *vma, bool compound)
 {
 	if (unlikely(vma->vm_flags & VM_LOCKED) &&
-	    (compound || !PageTransCompound(page)))
-		munlock_page(page);
+	    (compound || !folio_test_large(folio)))
+		munlock_folio(folio);
+}
+
+static inline void munlock_vma_page(struct page *page,
+		struct vm_area_struct *vma, bool compound)
+{
+	munlock_vma_folio(page_folio(page), vma, compound);
 }
 
-void mlock_new_page(struct page *page);
+void mlock_new_folio(struct folio *folio);
 bool need_mlock_page_drain(int cpu);
 void mlock_page_drain_local(void);
 void mlock_page_drain_remote(int cpu);
@@ -647,7 +653,7 @@ static inline void mlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
 static inline void munlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
-static inline void mlock_new_page(struct page *page) { }
+static inline void mlock_new_folio(struct folio *folio) { }
 static inline bool need_mlock_page_drain(int cpu) { return false; }
 static inline void mlock_page_drain_local(void) { }
 static inline void mlock_page_drain_remote(int cpu) { }
diff --git a/mm/mlock.c b/mm/mlock.c
index e9ba47fe67ed..0317b33c727f 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -262,13 +262,12 @@ void mlock_folio(struct folio *folio)
 }
 
 /**
- * mlock_new_page - mlock a newly allocated page not yet on LRU
- * @page: page to be mlocked, either a normal page or a THP head.
+ * mlock_new_folio - mlock a newly allocated folio not yet on LRU
+ * @folio: folio to be mlocked, either normal or a THP head.
  */
-void mlock_new_page(struct page *page)
+void mlock_new_folio(struct folio *folio)
 {
 	struct folio_batch *fbatch;
-	struct folio *folio = page_folio(page);
 	int nr_pages = folio_nr_pages(folio);
 
 	local_lock(&mlock_fbatch.lock);
@@ -286,13 +285,12 @@ void mlock_new_page(struct page *page)
 }
 
 /**
- * munlock_page - munlock a page
- * @page: page to be munlocked, either a normal page or a THP head.
+ * munlock_folio - munlock a folio
+ * @folio: folio to be munlocked, either normal or a THP head.
  */
-void munlock_page(struct page *page)
+void munlock_folio(struct folio *folio)
 {
 	struct folio_batch *fbatch;
-	struct folio *folio = page_folio(page);
 
 	local_lock(&mlock_fbatch.lock);
 	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
@@ -314,7 +312,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	struct vm_area_struct *vma = walk->vma;
 	spinlock_t *ptl;
 	pte_t *start_pte, *pte;
-	struct page *page;
+	struct folio *folio;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -322,11 +320,11 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 			goto out;
 		if (is_huge_zero_pmd(*pmd))
 			goto out;
-		page = pmd_page(*pmd);
+		folio = page_folio((struct page *)pmd_page(*pmd));
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_folio(page_folio(page));
+			mlock_folio(folio);
 		else
-			munlock_page(page);
+			munlock_folio(folio);
 		goto out;
 	}
 
@@ -334,15 +332,15 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	for (pte = start_pte; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (!page || is_zone_device_page(page))
+		folio = vm_normal_folio(vma, addr, *pte);
+		if (!folio || folio_is_zone_device(folio))
 			continue;
-		if (PageTransCompound(page))
+		if (folio_test_large(folio))
 			continue;
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_folio(page_folio(page));
+			mlock_folio(folio);
 		else
-			munlock_page(page);
+			munlock_folio(folio);
 	}
 	pte_unmap(start_pte);
 out:
diff --git a/mm/swap.c b/mm/swap.c
index e54e2a252e27..7df297b143f9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -562,7 +562,7 @@ void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma)
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
 	if (unlikely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED))
-		mlock_new_page(&folio->page);
+		mlock_new_folio(folio);
 	else
 		folio_add_lru(folio);
 }

From patchwork Mon Dec 26 07:08:47 2022
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13081669
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett, William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport, Joel Fernandes, Lorenzo Stoakes
Subject: [PATCH v2 4/4] Documentation/mm: Update references to __m[un]lock_page() to *_folio()
Date: Mon, 26 Dec 2022 07:08:47 +0000
Message-Id: <54006f75cb3c03b98e5a3d0968294db8c6889089.1672038314.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
We now pass folios to these functions, so update the documentation
accordingly. Additionally, correct the outdated reference to
__pagevec_lru_add_fn(); the referenced action now occurs directly in
__munlock_folio().
Signed-off-by: Lorenzo Stoakes
---
 Documentation/mm/unevictable-lru.rst | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 4a0e158aa9ce..153629e0c100 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -308,22 +308,22 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
 fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
 
 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
-calls mlock_vma_page(), which calls mlock_page() when the VMA is VM_LOCKED
+calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
 (unless it is a PTE mapping of a part of a transparent huge page). Or when
 it is a newly allocated anonymous page, lru_cache_add_inactive_or_unevictable()
-calls mlock_new_page() instead: similar to mlock_page(), but can make better
+calls mlock_new_folio() instead: similar to mlock_folio(), but can make better
 judgments, since this page is held exclusively and known not to be on LRU yet.
 
-mlock_page() sets PageMlocked immediately, then places the page on the CPU's
-mlock pagevec, to batch up the rest of the work to be done under lru_lock by
-__mlock_page(). __mlock_page() sets PageUnevictable, initializes mlock_count
+mlock_folio() sets PageMlocked immediately, then places the page on the CPU's
+mlock folio batch, to batch up the rest of the work to be done under lru_lock by
+__mlock_folio(). __mlock_folio() sets PageUnevictable, initializes mlock_count
 and moves the page to unevictable state ("the unevictable LRU", but with
 mlock_count in place of LRU threading). Or if the page was already PageLRU
 and PageUnevictable and PageMlocked, it simply increments the mlock_count.
 
 But in practice that may not work ideally: the page may not yet be on an LRU, or
 it may have been temporarily isolated from LRU. In such cases the mlock_count
-field cannot be touched, but will be set to 0 later when __pagevec_lru_add_fn()
+field cannot be touched, but will be set to 0 later when __munlock_folio()
 returns the page to "LRU". Races prohibit mlock_count from being set to 1 then:
 rather than risk stranding a page indefinitely as unevictable, always err with
 mlock_count on the low side, so that when munlocked the page will be rescued to