From patchwork Mon Feb 24 16:55:48 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13988499
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, Andrew Morton, "Matthew Wilcox (Oracle)", Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný, Jonathan Corbet, Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, Muchun Song, "Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn
Subject: [PATCH v2 06/20] mm: move _entire_mapcount in folio to page[2] on 32bit
Date: Mon, 24 Feb 2025 17:55:48 +0100
Message-ID: <20250224165603.1434404-7-david@redhat.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250224165603.1434404-1-david@redhat.com>
References: <20250224165603.1434404-1-david@redhat.com>
Let's free up some space on 32bit in page[1] by moving the _entire_mapcount to page[2].

Ordinary folios only use the entire mapcount with PMD mappings, so order-1 folios don't apply.
Similarly, hugetlb folios are always larger than order-1, leaving the entire mapcount essentially unused for all order-1 folios; making it unavailable for them will not change anything.

On 32bit, simply check in folio_entire_mapcount() whether we have an order-1 folio, and return 0 in that case.

Note that THPs on 32bit are not particularly common (and we don't care too much about performance), but we want to keep it working reliably, because likely we want to use large folios there as well in the future, independent of PMD leaf support.

Once we dynamically allocate "struct folio", the 32bit specifics will go away again; even small folios could then have an entire mapcount.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h       |  2 ++
 include/linux/mm_types.h |  3 ++-
 mm/internal.h            |  5 +++--
 mm/page_alloc.c          | 12 ++++++++----
 4 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1a4ee028a851e..9c1290588a11e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1333,6 +1333,8 @@ static inline int is_vmalloc_or_module_addr(const void *x)
 static inline int folio_entire_mapcount(const struct folio *folio)
 {
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
+	if (!IS_ENABLED(CONFIG_64BIT) && unlikely(folio_large_order(folio) == 1))
+		return 0;
 	return atomic_read(&folio->_entire_mapcount) + 1;
 }
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 31f466d8485bc..c83dd2f1ee25e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -385,9 +385,9 @@ struct folio {
 		union {
 			struct {
 				atomic_t _large_mapcount;
-				atomic_t _entire_mapcount;
 				atomic_t _nr_pages_mapped;
 #ifdef CONFIG_64BIT
+				atomic_t _entire_mapcount;
 				atomic_t _pincount;
 #endif /* CONFIG_64BIT */
 			};
@@ -409,6 +409,7 @@ struct folio {
 			/* public: */
 			struct list_head _deferred_list;
 #ifndef CONFIG_64BIT
+			atomic_t _entire_mapcount;
 			atomic_t _pincount;
 #endif /* !CONFIG_64BIT */
 			/* private: the union with struct page is transitional */
diff --git a/mm/internal.h b/mm/internal.h
index d33db24c8b17b..ffdc91b19322e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -721,10 +721,11 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
 
 	folio_set_order(folio, order);
 	atomic_set(&folio->_large_mapcount, -1);
-	atomic_set(&folio->_entire_mapcount, -1);
 	atomic_set(&folio->_nr_pages_mapped, 0);
-	if (IS_ENABLED(CONFIG_64BIT) || order > 1)
+	if (IS_ENABLED(CONFIG_64BIT) || order > 1) {
 		atomic_set(&folio->_pincount, 0);
+		atomic_set(&folio->_entire_mapcount, -1);
+	}
 	if (order > 1)
 		INIT_LIST_HEAD(&folio->_deferred_list);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3dff99cc54161..7036530bd1bca 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -947,10 +947,6 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
 	switch (page - head_page) {
 	case 1:
 		/* the first tail page: these may be in place of ->mapping */
-		if (unlikely(folio_entire_mapcount(folio))) {
-			bad_page(page, "nonzero entire_mapcount");
-			goto out;
-		}
 		if (unlikely(folio_large_mapcount(folio))) {
 			bad_page(page, "nonzero large_mapcount");
 			goto out;
@@ -960,6 +956,10 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
 			goto out;
 		}
 		if (IS_ENABLED(CONFIG_64BIT)) {
+			if (unlikely(atomic_read(&folio->_entire_mapcount) + 1)) {
+				bad_page(page, "nonzero entire_mapcount");
+				goto out;
+			}
 			if (unlikely(atomic_read(&folio->_pincount))) {
 				bad_page(page, "nonzero pincount");
 				goto out;
@@ -973,6 +973,10 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
 			goto out;
 		}
 		if (!IS_ENABLED(CONFIG_64BIT)) {
+			if (unlikely(atomic_read(&folio->_entire_mapcount) + 1)) {
+				bad_page(page, "nonzero entire_mapcount");
+				goto out;
+			}
 			if (unlikely(atomic_read(&folio->_pincount))) {
 				bad_page(page, "nonzero pincount");
 				goto out;