From patchwork Wed Aug 16 15:11:49 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13355393
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org, David Hildenbrand
Subject: [PATCH v2 01/13] io_uring: Stop calling free_compound_page()
Date: Wed, 16 Aug 2023 16:11:49 +0100
Message-Id:
 <20230816151201.3655946-2-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>

folio_put() is the standard way to write this, and it's not appreciably
slower.  This is an enabling patch for removing free_compound_page()
entirely.
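[Editor's sketch, not part of the patch: a minimal userspace analogue of the transformation. The `struct obj` type and helper names below are invented for illustration and are not the kernel API; the point is that the open-coded "drop the head page's reference, free on zero" dance collapses into one standard put helper.]

```c
#include <assert.h>

/* Hypothetical refcounted object standing in for a compound page/folio. */
struct obj {
	int refcount;
	int freed;		/* set once the last reference is dropped */
};

/* Stand-in for the release path (free_compound_page() in the old code,
 * hidden inside folio_put() in the new code). */
static void obj_free(struct obj *o)
{
	o->freed = 1;
}

/* Stand-in for folio_put(): drop a reference, free on zero. */
static void obj_put(struct obj *o)
{
	if (--o->refcount == 0)
		obj_free(o);
}

/* Old caller shape: open-coded test-and-free, as io_mem_free() did. */
static void mem_free_old(struct obj *o)
{
	if (--o->refcount == 0)		/* like put_page_testzero() */
		obj_free(o);		/* like free_compound_page() */
}

/* New caller shape: one call to the standard helper. */
static void mem_free_new(struct obj *o)
{
	obj_put(o);
}
```

Both caller shapes do the same work; the commit's point is that the second spelling is the idiomatic one, which lets free_compound_page() disappear from callers entirely.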
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: David Hildenbrand
Reviewed-by: Jens Axboe
---
 io_uring/io_uring.c | 6 +-----
 io_uring/kbuf.c     | 6 +-----
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index dcf5fc7d2820..a5b9b5de7aff 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2664,14 +2664,10 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 
 static void io_mem_free(void *ptr)
 {
-	struct page *page;
-
 	if (!ptr)
 		return;
 
-	page = virt_to_head_page(ptr);
-	if (put_page_testzero(page))
-		free_compound_page(page);
+	folio_put(virt_to_folio(ptr));
 }
 
 static void io_pages_free(struct page ***pages, int npages)
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 2f0181521c98..556f4df25b0f 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -218,11 +218,7 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx,
 	if (bl->is_mapped) {
 		i = bl->buf_ring->tail - bl->head;
 		if (bl->is_mmap) {
-			struct page *page;
-
-			page = virt_to_head_page(bl->buf_ring);
-			if (put_page_testzero(page))
-				free_compound_page(page);
+			folio_put(virt_to_folio(bl->buf_ring));
 			bl->buf_ring = NULL;
 			bl->is_mmap = 0;
 		} else if (bl->buf_nr_pages) {

From patchwork Wed Aug 16 15:11:50 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13355387
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 02/13] mm: Call free_huge_page() directly
Date: Wed, 16 Aug 2023 16:11:50 +0100
Message-Id: <20230816151201.3655946-3-willy@infradead.org>
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>
Indirect calls are expensive, thanks to Spectre.  Call free_huge_page()
directly if the folio belongs to hugetlb.
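[Editor's sketch, not part of the patch: a rough userspace illustration of the dispatch change. All types and names below are invented; `compound_page_dtors[]` and `folio_test_hugetlb()` are only referenced in comments. With Spectre mitigations (retpolines), a call through a destructor table is much slower than a well-predicted branch plus a direct call, so the common hugetlb case is peeled out of the table.]

```c
#include <assert.h>

enum dtor_id { PLAIN_DTOR, HUGETLB_DTOR, NR_DTORS };

struct fake_folio {
	enum dtor_id dtor;
	int freed_by;			/* records which path freed it */
};

static void free_plain(struct fake_folio *f)   { f->freed_by = PLAIN_DTOR; }
static void free_hugetlb(struct fake_folio *f) { f->freed_by = HUGETLB_DTOR; }

/* Destructor table, analogous to compound_page_dtors[]: calling through
 * it is an indirect call, which retpolines make expensive. */
static void (* const dtors[NR_DTORS])(struct fake_folio *) = {
	[PLAIN_DTOR]   = free_plain,
	[HUGETLB_DTOR] = free_hugetlb,
};

static void destroy_large(struct fake_folio *f)
{
	if (f->dtor == HUGETLB_DTOR) {	/* like folio_test_hugetlb() */
		free_hugetlb(f);	/* direct call: cheap, predictable */
		return;
	}
	dtors[f->dtor](f);		/* indirect call for the rest */
}
```

Either path frees the folio; only the cost of reaching the destructor differs.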
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/hugetlb.h | 3 ++-
 mm/page_alloc.c         | 8 +++++---
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 0a393bc02f25..5a1dfaffbd80 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -26,6 +26,8 @@ typedef struct { unsigned long pd; } hugepd_t;
 #define __hugepd(x) ((hugepd_t) { (x) })
 #endif
 
+void free_huge_page(struct page *page);
+
 #ifdef CONFIG_HUGETLB_PAGE
 
 #include
@@ -165,7 +167,6 @@ int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
 				bool *migratable_cleared);
 void folio_putback_active_hugetlb(struct folio *folio);
 void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int reason);
-void free_huge_page(struct page *page);
 void hugetlb_fix_reserve_counts(struct inode *inode);
 extern struct mutex *hugetlb_fault_mutex_table;
 u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8fe9ff917850..548c8016190b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -287,9 +287,6 @@ const char * const migratetype_names[MIGRATE_TYPES] = {
 static compound_page_dtor * const compound_page_dtors[NR_COMPOUND_DTORS] = {
 	[NULL_COMPOUND_DTOR] = NULL,
 	[COMPOUND_PAGE_DTOR] = free_compound_page,
-#ifdef CONFIG_HUGETLB_PAGE
-	[HUGETLB_PAGE_DTOR] = free_huge_page,
-#endif
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	[TRANSHUGE_PAGE_DTOR] = free_transhuge_page,
 #endif
@@ -622,6 +619,11 @@ void destroy_large_folio(struct folio *folio)
 {
 	enum compound_dtor_id dtor = folio->_folio_dtor;
 
+	if (folio_test_hugetlb(folio)) {
+		free_huge_page(&folio->page);
+		return;
+	}
+
 	VM_BUG_ON_FOLIO(dtor >= NR_COMPOUND_DTORS, folio);
 	compound_page_dtors[dtor](&folio->page);
 }

From patchwork Wed Aug 16 15:11:51 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13355388
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org, Yanteng Si
Subject: [PATCH v2 03/13] mm: Convert free_huge_page() to free_huge_folio()
Date: Wed, 16 Aug 2023 16:11:51 +0100
Message-Id: <20230816151201.3655946-4-willy@infradead.org>
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References:
 <20230816151201.3655946-1-willy@infradead.org>

Pass a folio instead of the head page to save a few instructions.
Update the documentation, at least in English.
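[Editor's sketch, not part of the patch: the shape of the interface change in userspace C. The `mypage`/`myfolio` types and helpers are invented stand-ins, not kernel definitions. A callee that receives the head page must first rederive the folio; a callee that receives the folio skips that step, and callers already holding a folio stop detouring through `&folio->page`.]

```c
#include <assert.h>

struct mypage  { int id; };
struct myfolio { struct mypage page; };	/* folio overlays its head page */

/* Stand-in for page_folio(): head page and folio share an address. */
static struct myfolio *my_page_folio(struct mypage *p)
{
	return (struct myfolio *)p;
}

/* Old shape, like free_huge_page(struct page *): rederives the folio. */
static int release_old(struct mypage *p)
{
	struct myfolio *f = my_page_folio(p);	/* extra work on every call */
	return f->page.id;
}

/* New shape, like free_huge_folio(struct folio *): no derivation needed. */
static int release_new(struct myfolio *f)
{
	return f->page.id;
}
```

Both return the same answer; the folio-taking form simply has less to do, which is the "save a few instructions" in the commit message.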
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Yanteng Si
Reviewed-by: Sidhartha Kumar
---
 Documentation/mm/hugetlbfs_reserv.rst | 14 +++---
 .../zh_CN/mm/hugetlbfs_reserv.rst     |  4 +-
 include/linux/hugetlb.h               |  2 +-
 mm/hugetlb.c                          | 48 +++++++++----------
 mm/page_alloc.c                       |  2 +-
 5 files changed, 34 insertions(+), 36 deletions(-)

diff --git a/Documentation/mm/hugetlbfs_reserv.rst b/Documentation/mm/hugetlbfs_reserv.rst
index d9c2b0f01dcd..4914fbf07966 100644
--- a/Documentation/mm/hugetlbfs_reserv.rst
+++ b/Documentation/mm/hugetlbfs_reserv.rst
@@ -271,12 +271,12 @@ to the global reservation count (resv_huge_pages).
 Freeing Huge Pages
 ==================
 
-Huge page freeing is performed by the routine free_huge_page(). This routine
-is the destructor for hugetlbfs compound pages. As a result, it is only
-passed a pointer to the page struct. When a huge page is freed, reservation
-accounting may need to be performed. This would be the case if the page was
-associated with a subpool that contained reserves, or the page is being freed
-on an error path where a global reserve count must be restored.
+Huge pages are freed by free_huge_folio(). It is only passed a pointer
+to the folio as it is called from the generic MM code. When a huge page
+is freed, reservation accounting may need to be performed. This would
+be the case if the page was associated with a subpool that contained
+reserves, or the page is being freed on an error path where a global
+reserve count must be restored.
 
 The page->private field points to any subpool associated with the page. If
 the PagePrivate flag is set, it indicates the global reserve count should
@@ -525,7 +525,7 @@ However, there are several instances where errors are encountered after a huge
 page is allocated but before it is instantiated. In this case, the page
 allocation has consumed the reservation and made the appropriate subpool,
 reservation map and global count adjustments. If the page is freed at this
-time (before instantiation and clearing of PagePrivate), then free_huge_page
+time (before instantiation and clearing of PagePrivate), then free_huge_folio
 will increment the global reservation count. However, the reservation map
 indicates the reservation was consumed. This resulting inconsistent state
 will cause the 'leak' of a reserved huge page. The global reserve count will
diff --git a/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst b/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst
index b7a0544224ad..0f7e7fb5ca8c 100644
--- a/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst
+++ b/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst
@@ -219,7 +219,7 @@ vma_commit_reservation()之间,预留映射有可能被改变。如果hugetlb_
 释放巨页
 ========
 
-巨页释放是由函数free_huge_page()执行的。这个函数是hugetlbfs复合页的析构器。因此,它只传
+巨页释放是由函数free_huge_folio()执行的。这个函数是hugetlbfs复合页的析构器。因此,它只传
 递一个指向页面结构体的指针。当一个巨页被释放时,可能需要进行预留计算。如果该页与包含保
 留的子池相关联,或者该页在错误路径上被释放,必须恢复全局预留计数,就会出现这种情况。
@@ -387,7 +387,7 @@ region_count()在解除私有巨页映射时被调用。在私有映射中,预
 然而,有几种情况是,在一个巨页被分配后,但在它被实例化之前,就遇到了错误。在这种情况下,
 页面分配已经消耗了预留,并进行了适当的子池、预留映射和全局计数调整。如果页面在这个时候被释放
-(在实例化和清除PagePrivate之前),那么free_huge_page将增加全局预留计数。然而,预留映射
+(在实例化和清除PagePrivate之前),那么free_huge_folio将增加全局预留计数。然而,预留映射
 显示报留被消耗了。这种不一致的状态将导致预留的巨页的 “泄漏” 。全局预留计数将比它原本的要高,
 并阻止分配一个预先分配的页面。
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 5a1dfaffbd80..5b2626063f4f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -26,7 +26,7 @@ typedef struct { unsigned long pd; } hugepd_t;
 #define __hugepd(x) ((hugepd_t) { (x) })
 #endif
 
-void free_huge_page(struct page *page);
+void free_huge_folio(struct folio *folio);
 
 #ifdef CONFIG_HUGETLB_PAGE
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e327a5a7602c..086eb51bf845 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1706,10 +1706,10 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
 	zeroed = folio_put_testzero(folio);
 	if (unlikely(!zeroed))
 		/*
-		 * It is VERY unlikely soneone else has taken a ref on
-		 * the page. In this case, we simply return as the
-		 * hugetlb destructor (free_huge_page) will be called
-		 * when this other ref is dropped.
+		 * It is VERY unlikely soneone else has taken a ref
+		 * on the folio. In this case, we simply return as
+		 * free_huge_folio() will be called when this other ref
+		 * is dropped.
 		 */
 		return;
@@ -1875,13 +1875,12 @@ struct hstate *size_to_hstate(unsigned long size)
 	return NULL;
 }
 
-void free_huge_page(struct page *page)
+void free_huge_folio(struct folio *folio)
 {
 	/*
 	 * Can't pass hstate in here because it is called from the
 	 * compound page destructor.
 	 */
-	struct folio *folio = page_folio(page);
 	struct hstate *h = folio_hstate(folio);
 	int nid = folio_nid(folio);
 	struct hugepage_subpool *spool = hugetlb_folio_subpool(folio);
@@ -1936,7 +1935,7 @@ void free_huge_page(struct page *page)
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_hugetlb_folio(h, folio, true);
 	} else {
-		arch_clear_hugepage_flags(page);
+		arch_clear_hugepage_flags(&folio->page);
 		enqueue_hugetlb_folio(h, folio);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 	}
@@ -2246,7 +2245,7 @@ static int alloc_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 		folio = alloc_fresh_hugetlb_folio(h, gfp_mask, node,
 					nodes_allowed, node_alloc_noretry);
 		if (folio) {
-			free_huge_page(&folio->page); /* free it into the hugepage allocator */
+			free_huge_folio(folio); /* free it into the hugepage allocator */
 			return 1;
 		}
 	}
@@ -2429,13 +2428,13 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
 	 * We could have raced with the pool size change.
 	 * Double check that and simply deallocate the new page
 	 * if we would end up overcommiting the surpluses. Abuse
-	 * temporary page to workaround the nasty free_huge_page
+	 * temporary page to workaround the nasty free_huge_folio
 	 * codeflow
 	 */
 	if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
 		folio_set_hugetlb_temporary(folio);
 		spin_unlock_irq(&hugetlb_lock);
-		free_huge_page(&folio->page);
+		free_huge_folio(folio);
 		return NULL;
 	}
@@ -2547,8 +2546,7 @@ static int gather_surplus_pages(struct hstate *h, long delta)
 	__must_hold(&hugetlb_lock)
 {
 	LIST_HEAD(surplus_list);
-	struct folio *folio;
-	struct page *page, *tmp;
+	struct folio *folio, *tmp;
 	int ret;
 	long i;
 	long needed, allocated;
@@ -2608,21 +2606,21 @@ static int gather_surplus_pages(struct hstate *h, long delta)
 	ret = 0;
 
 	/* Free the needed pages to the hugetlb pool */
-	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
+	list_for_each_entry_safe(folio, tmp, &surplus_list, lru) {
 		if ((--needed) < 0)
 			break;
 		/* Add the page to the hugetlb allocator */
-		enqueue_hugetlb_folio(h, page_folio(page));
+		enqueue_hugetlb_folio(h, folio);
 	}
 free:
 	spin_unlock_irq(&hugetlb_lock);
 
 	/*
 	 * Free unnecessary surplus pages to the buddy allocator.
-	 * Pages have no ref count, call free_huge_page directly.
+	 * Pages have no ref count, call free_huge_folio directly.
 	 */
-	list_for_each_entry_safe(page, tmp, &surplus_list, lru)
-		free_huge_page(page);
+	list_for_each_entry_safe(folio, tmp, &surplus_list, lru)
+		free_huge_folio(folio);
 	spin_lock_irq(&hugetlb_lock);
 
 	return ret;
@@ -2836,11 +2834,11 @@ static long vma_del_reservation(struct hstate *h,
  * 2) No reservation was in place for the page, so hugetlb_restore_reserve is
  *    not set. However, alloc_hugetlb_folio always updates the reserve map.
  *
- * In case 1, free_huge_page later in the error path will increment the
- * global reserve count. But, free_huge_page does not have enough context
+ * In case 1, free_huge_folio later in the error path will increment the
+ * global reserve count. But, free_huge_folio does not have enough context
  * to adjust the reservation map. This case deals primarily with private
  * mappings. Adjust the reserve map here to be consistent with global
- * reserve count adjustments to be made by free_huge_page. Make sure the
+ * reserve count adjustments to be made by free_huge_folio. Make sure the
  * reserve map indicates there is a reservation present.
 *
 * In case 2, simply undo reserve map modifications done by alloc_hugetlb_folio.
@@ -2856,7 +2854,7 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
 			 * Rare out of memory condition in reserve map
 			 * manipulation. Clear hugetlb_restore_reserve so
 			 * that global reserve count will not be incremented
-			 * by free_huge_page. This will make it appear
+			 * by free_huge_folio. This will make it appear
 			 * as though the reservation for this folio was
 			 * consumed. This may prevent the task from
 			 * faulting in the folio at a later time. This
@@ -3232,7 +3230,7 @@ static void __init gather_bootmem_prealloc(void)
 		if (prep_compound_gigantic_folio(folio, huge_page_order(h))) {
 			WARN_ON(folio_test_reserved(folio));
 			prep_new_hugetlb_folio(h, folio, folio_nid(folio));
-			free_huge_page(page); /* add to the hugepage allocator */
+			free_huge_folio(folio); /* add to the hugepage allocator */
 		} else {
 			/* VERY unlikely inflated ref count on a tail page */
 			free_gigantic_folio(folio, huge_page_order(h));
@@ -3264,7 +3262,7 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 				&node_states[N_MEMORY], NULL);
 		if (!folio)
 			break;
-		free_huge_page(&folio->page); /* free it into the hugepage allocator */
+		free_huge_folio(folio); /* free it into the hugepage allocator */
 	}
 	cond_resched();
 }
@@ -3542,7 +3540,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	while (count > persistent_huge_pages(h)) {
 		/*
 		 * If this allocation races such that we no longer need the
-		 * page, free_huge_page will handle it by freeing the page
+		 * page, free_huge_folio will handle it by freeing the page
 		 * and reducing the surplus.
 		 */
 		spin_unlock_irq(&hugetlb_lock);
@@ -3658,7 +3656,7 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 		prep_compound_page(subpage, target_hstate->order);
 		folio_change_private(inner_folio, NULL);
 		prep_new_hugetlb_folio(target_hstate, inner_folio, nid);
-		free_huge_page(subpage);
+		free_huge_folio(inner_folio);
 	}
 	mutex_unlock(&target_hstate->resize_lock);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 548c8016190b..b569fd5562aa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -620,7 +620,7 @@ void destroy_large_folio(struct folio *folio)
 	enum compound_dtor_id dtor = folio->_folio_dtor;
 
 	if (folio_test_hugetlb(folio)) {
-		free_huge_page(&folio->page);
+		free_huge_folio(folio);
 		return;
 	}

From patchwork Wed Aug 16 15:11:52 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13355395
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 04/13] mm: Convert free_transhuge_page() to folio_undo_large_rmappable()
Date: Wed, 16 Aug 2023 16:11:52 +0100
Message-Id: <20230816151201.3655946-5-willy@infradead.org>
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>
Indirect calls are expensive, thanks to Spectre. Test for TRANSHUGE_PAGE_DTOR and destroy the folio appropriately. Move the free_compound_page() call into destroy_large_folio() to simplify later patches.
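As an illustrative aside (a userspace sketch, not kernel code — names are invented for the example), the shape of the change this commit message describes can be shown as: replacing dispatch through a function-pointer table (an indirect call, expensive under Spectre mitigations such as retpolines) with a direct test and direct calls:

```c
#include <assert.h>

/* Illustrative model of the "before" and "after" dispatch patterns. */
enum dtor_id { PLAIN_DTOR, SPECIAL_DTOR, NR_DTORS };

struct obj {
	enum dtor_id dtor;
	int freed_specially;
};

static void free_plain(struct obj *o)   { o->freed_specially = 0; }
static void free_special(struct obj *o) { o->freed_specially = 1; }

/* Before: an indirect call through a destructor table. */
static void (*const dtors[NR_DTORS])(struct obj *) = {
	[PLAIN_DTOR]   = free_plain,
	[SPECIAL_DTOR] = free_special,
};

static void destroy_indirect(struct obj *o)
{
	dtors[o->dtor](o);	/* indirect call: retpoline-hostile */
}

/* After: test the id directly and make direct (inlinable) calls. */
static void destroy_direct(struct obj *o)
{
	if (o->dtor == SPECIAL_DTOR) {
		free_special(o);
		return;
	}
	free_plain(o);
}
```

Both paths are behaviorally identical; the direct form merely trades a table lookup plus indirect branch for a compare that the branch predictor and compiler handle cheaply.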
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/huge_mm.h |  2 --
 include/linux/mm.h      |  2 --
 mm/huge_memory.c        | 22 +++++++++++-----------
 mm/internal.h           |  2 ++
 mm/page_alloc.c         |  9 ++++++---
 5 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 20284387b841..f351c3f9d58b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -144,8 +144,6 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 void prep_transhuge_page(struct page *page);
-void free_transhuge_page(struct page *page);
-
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 19493d6a2bb8..6c338b65b86b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1281,9 +1281,7 @@ enum compound_dtor_id {
 #ifdef CONFIG_HUGETLB_PAGE
 	HUGETLB_PAGE_DTOR,
 #endif
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	TRANSHUGE_PAGE_DTOR,
-#endif
 	NR_COMPOUND_DTORS,
 };
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8480728fa220..9598bbe6c792 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2779,10 +2779,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	return ret;
 }

-void free_transhuge_page(struct page *page)
+void folio_undo_large_rmappable(struct folio *folio)
 {
-	struct folio *folio = (struct folio *)page;
-	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
+	struct deferred_split *ds_queue;
 	unsigned long flags;

 	/*
@@ -2790,15 +2789,16 @@ void free_transhuge_page(struct page *page)
 	 * deferred_list. If folio is not in deferred_list, it's safe
 	 * to check without acquiring the split_queue_lock.
 	 */
-	if (data_race(!list_empty(&folio->_deferred_list))) {
-		spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
-		if (!list_empty(&folio->_deferred_list)) {
-			ds_queue->split_queue_len--;
-			list_del(&folio->_deferred_list);
-		}
-		spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	if (data_race(list_empty(&folio->_deferred_list)))
+		return;
+
+	ds_queue = get_deferred_split_queue(folio);
+	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	if (!list_empty(&folio->_deferred_list)) {
+		ds_queue->split_queue_len--;
+		list_del(&folio->_deferred_list);
 	}
-	free_compound_page(page);
+	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
 }

 void deferred_split_folio(struct folio *folio)
diff --git a/mm/internal.h b/mm/internal.h
index 5a03bc4782a2..1e98c867f0de 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -413,6 +413,8 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
 #endif
 }

+void folio_undo_large_rmappable(struct folio *folio);
+
 static inline void prep_compound_head(struct page *page, unsigned int order)
 {
 	struct folio *folio = (struct folio *)page;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b569fd5562aa..0dbc2ecdefa5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -287,9 +287,6 @@ const char * const migratetype_names[MIGRATE_TYPES] = {
 static compound_page_dtor * const compound_page_dtors[NR_COMPOUND_DTORS] = {
 	[NULL_COMPOUND_DTOR] = NULL,
 	[COMPOUND_PAGE_DTOR] = free_compound_page,
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	[TRANSHUGE_PAGE_DTOR] = free_transhuge_page,
-#endif
 };
 int min_free_kbytes = 1024;
@@ -624,6 +621,12 @@ void destroy_large_folio(struct folio *folio)
 		return;
 	}

+	if (folio_test_transhuge(folio) && dtor == TRANSHUGE_PAGE_DTOR) {
+		folio_undo_large_rmappable(folio);
+		free_compound_page(&folio->page);
+		return;
+	}
+
 	VM_BUG_ON_FOLIO(dtor >= NR_COMPOUND_DTORS, folio);
 	compound_page_dtors[dtor](&folio->page);
 }

From patchwork Wed Aug 16 15:11:53 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 05/13] mm: Convert prep_transhuge_page() to folio_prep_large_rmappable()
Date: Wed, 16 Aug 2023 16:11:53 +0100
Message-Id: <20230816151201.3655946-6-willy@infradead.org>
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>

Match folio_undo_large_rmappable(), and move the casting from page to folio into the callers (which they were largely doing anyway).
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/huge_mm.h |  4 ++--
 mm/huge_memory.c        |  4 +---
 mm/khugepaged.c         |  2 +-
 mm/mempolicy.c          | 15 ++++++++-------
 mm/page_alloc.c         |  7 ++++---
 5 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f351c3f9d58b..6d812b8856c8 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -143,7 +143,7 @@ bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
-void prep_transhuge_page(struct page *page);
+void folio_prep_large_rmappable(struct folio *folio);
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
@@ -283,7 +283,7 @@ static inline bool hugepage_vma_check(struct vm_area_struct *vma,
 	return false;
 }

-static inline void prep_transhuge_page(struct page *page) {}
+static inline void folio_prep_large_rmappable(struct folio *folio) {}

 #define transparent_hugepage_flags 0UL
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9598bbe6c792..04664e6918c1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -577,10 +577,8 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
 }
 #endif

-void prep_transhuge_page(struct page *page)
+void folio_prep_large_rmappable(struct folio *folio)
 {
-	struct folio *folio = (struct folio *)page;
-
 	VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
 	INIT_LIST_HEAD(&folio->_deferred_list);
 	folio_set_compound_dtor(folio, TRANSHUGE_PAGE_DTOR);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index bb76a5d454de..a8e0eca2cd1e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -896,7 +896,7 @@ static bool hpage_collapse_alloc_page(struct page **hpage, gfp_t gfp, int node,
 		return false;
 	}

-	prep_transhuge_page(*hpage);
+	folio_prep_large_rmappable((struct folio *)*hpage);
 	count_vm_event(THP_COLLAPSE_ALLOC);
 	return true;
 }
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index c53f8beeb507..4afbb67ccf27 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2189,9 +2189,9 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		mpol_cond_put(pol);
 		gfp |= __GFP_COMP;
 		page = alloc_page_interleave(gfp, order, nid);
-		if (page && order > 1)
-			prep_transhuge_page(page);
 		folio = (struct folio *)page;
+		if (folio && order > 1)
+			folio_prep_large_rmappable(folio);
 		goto out;
 	}
@@ -2202,9 +2202,9 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		gfp |= __GFP_COMP;
 		page = alloc_pages_preferred_many(gfp, order, node, pol);
 		mpol_cond_put(pol);
-		if (page && order > 1)
-			prep_transhuge_page(page);
 		folio = (struct folio *)page;
+		if (folio && order > 1)
+			folio_prep_large_rmappable(folio);
 		goto out;
 	}
@@ -2300,10 +2300,11 @@ EXPORT_SYMBOL(alloc_pages);
 struct folio *folio_alloc(gfp_t gfp, unsigned order)
 {
 	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+	struct folio *folio = (struct folio *)page;

-	if (page && order > 1)
-		prep_transhuge_page(page);
-	return (struct folio *)page;
+	if (folio && order > 1)
+		folio_prep_large_rmappable(folio);
+	return folio;
 }
 EXPORT_SYMBOL(folio_alloc);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0dbc2ecdefa5..5ee4dc9318b7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4548,10 +4548,11 @@ struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
 {
 	struct page *page = __alloc_pages(gfp | __GFP_COMP, order,
 			preferred_nid, nodemask);
+	struct folio *folio = (struct folio *)page;

-	if (page && order > 1)
-		prep_transhuge_page(page);
-	return (struct folio *)page;
+	if (folio && order > 1)
+		folio_prep_large_rmappable(folio);
+	return folio;
 }
 EXPORT_SYMBOL(__folio_alloc);

From patchwork Wed Aug 16 15:11:54 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 06/13] mm: Remove free_compound_page() and the compound_page_dtors array
Date: Wed, 16 Aug 2023 16:11:54 +0100
Message-Id: <20230816151201.3655946-7-willy@infradead.org>
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>

The only remaining destructor is free_compound_page(). Inline it into destroy_large_folio() and remove the array it used to live in.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h | 10 ----------
 mm/page_alloc.c    | 24 +++++-------------------
 2 files changed, 5 insertions(+), 29 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6c338b65b86b..7b800d1298dc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1267,14 +1267,6 @@ void folio_copy(struct folio *dst, struct folio *src);

 unsigned long nr_free_buffer_pages(void);

-/*
- * Compound pages have a destructor function.  Provide a
- * prototype for that function and accessor functions.
- * These are _only_ valid on the head of a compound page.
- */
-typedef void compound_page_dtor(struct page *);
-
-/* Keep the enum in sync with compound_page_dtors array in mm/page_alloc.c */
 enum compound_dtor_id {
 	NULL_COMPOUND_DTOR,
 	COMPOUND_PAGE_DTOR,
@@ -1327,8 +1319,6 @@ static inline unsigned long thp_size(struct page *page)
 	return PAGE_SIZE << thp_order(page);
 }

-void free_compound_page(struct page *page);
-
 #ifdef CONFIG_MMU
 /*
  * Do pte_mkwrite, but only if the vma says VM_WRITE.  We do this when
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5ee4dc9318b7..9638fdddf065 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -284,11 +284,6 @@ const char * const migratetype_names[MIGRATE_TYPES] = {
 #endif
 };

-static compound_page_dtor * const compound_page_dtors[NR_COMPOUND_DTORS] = {
-	[NULL_COMPOUND_DTOR] = NULL,
-	[COMPOUND_PAGE_DTOR] = free_compound_page,
-};
-
 int min_free_kbytes = 1024;
 int user_min_free_kbytes = -1;
 static int watermark_boost_factor __read_mostly = 15000;
@@ -587,19 +582,13 @@ static inline void free_the_page(struct page *page, unsigned int order)
 * The remaining PAGE_SIZE pages are called "tail pages". PageTail() is encoded
 * in bit 0 of page->compound_head. The rest of bits is pointer to head page.
 *
- * The first tail page's ->compound_dtor holds the offset in array of compound
- * page destructors. See compound_page_dtors.
+ * The first tail page's ->compound_dtor describes how to destroy the
+ * compound page.
 *
 * The first tail page's ->compound_order holds the order of allocation.
 * This usage means that zero-order pages may not be compound.
 */

-void free_compound_page(struct page *page)
-{
-	mem_cgroup_uncharge(page_folio(page));
-	free_the_page(page, compound_order(page));
-}
-
 void prep_compound_page(struct page *page, unsigned int order)
 {
 	int i;
@@ -621,14 +610,11 @@ void destroy_large_folio(struct folio *folio)
 		return;
 	}

-	if (folio_test_transhuge(folio) && dtor == TRANSHUGE_PAGE_DTOR) {
+	if (folio_test_transhuge(folio) && dtor == TRANSHUGE_PAGE_DTOR)
 		folio_undo_large_rmappable(folio);
-		free_compound_page(&folio->page);
-		return;
-	}

-	VM_BUG_ON_FOLIO(dtor >= NR_COMPOUND_DTORS, folio);
-	compound_page_dtors[dtor](&folio->page);
+	mem_cgroup_uncharge(folio);
+	free_the_page(&folio->page, folio_order(folio));
 }

 static inline void set_buddy_order(struct page *page, unsigned int order)

From patchwork Wed Aug 16 15:11:55 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 07/13] mm: Remove HUGETLB_PAGE_DTOR
Date: Wed, 16 Aug 2023 16:11:55 +0100
Message-Id: <20230816151201.3655946-8-willy@infradead.org>
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>
We can use a bit in page[1].flags to indicate that this folio belongs to hugetlb instead of using a value in page[1].dtors. That lets folio_test_hugetlb() become an inline function like it should be. We can also get rid of NULL_COMPOUND_DTOR.
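As an illustrative aside (a simplified userspace model, not the kernel implementation — the struct and helper names below are invented for the example), the idea of recording "this compound page is hugetlb" as a flag bit in the first tail page rather than as a destructor id can be sketched as:

```c
#include <assert.h>
#include <stdbool.h>

/* A single flag bit, standing in for PG_hugetlb. */
#define PG_HUGETLB_BIT (1UL << 0)

struct page {
	unsigned long flags;
};

/*
 * A "folio" here is just the head page of a contiguous array of pages.
 * The type information lives in the flags of page[1], the first tail
 * page, so the test compiles down to one load and a bit test.
 */
static inline void folio_set_hugetlb(struct page *head)
{
	head[1].flags |= PG_HUGETLB_BIT;
}

static inline void folio_clear_hugetlb(struct page *head)
{
	head[1].flags &= ~PG_HUGETLB_BIT;
}

static inline bool folio_test_hugetlb(const struct page *head)
{
	return head[1].flags & PG_HUGETLB_BIT;
}
```

Because the test is a plain bit check rather than a comparison against an out-of-line destructor table entry, it can live in a header as an inline function, which is the point the commit message makes.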
Signed-off-by: Matthew Wilcox (Oracle)
---
 .../admin-guide/kdump/vmcoreinfo.rst |  10 +---
 include/linux/mm.h                   |   4 --
 include/linux/page-flags.h           |  43 ++++++++++++----
 kernel/crash_core.c                  |   2 +-
 mm/hugetlb.c                         |  49 +++----------------
 mm/page_alloc.c                      |   2 +-
 6 files changed, 43 insertions(+), 67 deletions(-)

diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
index c18d94fa6470..baa1c355741d 100644
--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
+++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -325,8 +325,8 @@ NR_FREE_PAGES
 On linux-2.6.21 or later, the number of free pages is in
 vm_stat[NR_FREE_PAGES]. Used to get the number of free pages.

-PG_lru|PG_private|PG_swapcache|PG_swapbacked|PG_slab|PG_hwpoision|PG_head_mask
-------------------------------------------------------------------------------
+PG_lru|PG_private|PG_swapcache|PG_swapbacked|PG_slab|PG_hwpoision|PG_head_mask|PG_hugetlb
+-----------------------------------------------------------------------------------------

 Page attributes. These flags are used to filter various unnecessary for
 dumping pages.
@@ -338,12 +338,6 @@ More page attributes. These flags are used to filter various unnecessary for
 dumping pages.

-HUGETLB_PAGE_DTOR
------------------
-
-The HUGETLB_PAGE_DTOR flag denotes hugetlbfs pages. Makedumpfile
-excludes these pages.
-
 x86_64
 ======

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7b800d1298dc..642f5fe5860e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1268,11 +1268,7 @@ void folio_copy(struct folio *dst, struct folio *src);
 unsigned long nr_free_buffer_pages(void);

 enum compound_dtor_id {
-	NULL_COMPOUND_DTOR,
 	COMPOUND_PAGE_DTOR,
-#ifdef CONFIG_HUGETLB_PAGE
-	HUGETLB_PAGE_DTOR,
-#endif
 	TRANSHUGE_PAGE_DTOR,
 	NR_COMPOUND_DTORS,
 };
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 92a2063a0a23..aeecf0cf1456 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -171,15 +171,6 @@ enum pageflags {
 	/* Remapped by swiotlb-xen. */
 	PG_xen_remapped = PG_owner_priv_1,

-#ifdef CONFIG_MEMORY_FAILURE
-	/*
-	 * Compound pages. Stored in first tail page's flags.
-	 * Indicates that at least one subpage is hwpoisoned in the
-	 * THP.
-	 */
-	PG_has_hwpoisoned = PG_error,
-#endif
-
 	/* non-lru isolated movable page */
 	PG_isolated = PG_reclaim,

@@ -190,6 +181,15 @@ enum pageflags {
 	/* For self-hosted memmap pages */
 	PG_vmemmap_self_hosted = PG_owner_priv_1,
 #endif
+
+	/*
+	 * Flags only valid for compound pages.  Stored in first tail page's
+	 * flags word.
+	 */
+
+	/* At least one page in this folio has the hwpoison flag set */
+	PG_has_hwpoisoned = PG_error,
+	PG_hugetlb = PG_active,
 };

 #define PAGEFLAGS_MASK		((1UL << NR_PAGEFLAGS) - 1)
@@ -812,7 +812,23 @@ static inline void ClearPageCompound(struct page *page)

 #ifdef CONFIG_HUGETLB_PAGE
 int PageHuge(struct page *page);
-bool folio_test_hugetlb(struct folio *folio);
+SETPAGEFLAG(HugeTLB, hugetlb, PF_SECOND)
+CLEARPAGEFLAG(HugeTLB, hugetlb, PF_SECOND)
+
+/**
+ * folio_test_hugetlb - Determine if the folio belongs to hugetlbfs
+ * @folio: The folio to test.
+ *
+ * Context: Any context.  Caller should have a reference on the folio to
+ * prevent it from being turned into a tail page.
+ * Return: True for hugetlbfs folios, false for anon folios or folios
+ * belonging to other filesystems.
+ */
+static inline bool folio_test_hugetlb(struct folio *folio)
+{
+	return folio_test_large(folio) &&
+		test_bit(PG_hugetlb, folio_flags(folio, 1));
+}
 #else
 TESTPAGEFLAG_FALSE(Huge, hugetlb)
 #endif
@@ -1040,6 +1056,13 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
 #define PAGE_FLAGS_CHECK_AT_PREP \
 	((PAGEFLAGS_MASK & ~__PG_HWPOISON) | LRU_GEN_MASK | LRU_REFS_MASK)

+/*
+ * Flags stored in the second page of a compound page.  They may overlap
+ * the CHECK_AT_FREE flags above, so need to be cleared.
+ */
+#define PAGE_FLAGS_SECOND \
+	(1UL << PG_has_hwpoisoned | 1UL << PG_hugetlb)
+
 #define PAGE_FLAGS_PRIVATE \
 	(1UL << PG_private | 1UL << PG_private_2)
 /**
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 90ce1dfd591c..dd5f87047d06 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -490,7 +490,7 @@ static int __init crash_save_vmcoreinfo_init(void)
 #define PAGE_BUDDY_MAPCOUNT_VALUE	(~PG_buddy)
 	VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE);
 #ifdef CONFIG_HUGETLB_PAGE
-	VMCOREINFO_NUMBER(HUGETLB_PAGE_DTOR);
+	VMCOREINFO_NUMBER(PG_hugetlb);
 #define PAGE_OFFLINE_MAPCOUNT_VALUE	(~PG_offline)
 	VMCOREINFO_NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE);
 #endif
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 086eb51bf845..389490f100b0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1585,25 +1585,7 @@ static inline void __clear_hugetlb_destructor(struct hstate *h,
 {
 	lockdep_assert_held(&hugetlb_lock);

-	/*
-	 * Very subtle
-	 *
-	 * For non-gigantic pages set the destructor to the normal compound
-	 * page dtor.  This is needed in case someone takes an additional
-	 * temporary ref to the page, and freeing is delayed until they drop
-	 * their reference.
-	 *
-	 * For gigantic pages set the destructor to the null dtor.  This
-	 * destructor will never be called.  Before freeing the gigantic
-	 * page destroy_compound_gigantic_folio will turn the folio into a
-	 * simple group of pages.  After this the destructor does not
-	 * apply.
-	 *
-	 */
-	if (hstate_is_gigantic(h))
-		folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
-	else
-		folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
+	folio_clear_hugetlb(folio);
 }

 /*
@@ -1690,7 +1672,7 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
 		h->surplus_huge_pages_node[nid]++;
 	}

-	folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
+	folio_set_hugetlb(folio);
 	folio_change_private(folio, NULL);
 	/*
 	 * We have to set hugetlb_vmemmap_optimized again as above
@@ -1814,9 +1796,8 @@ static void free_hpage_workfn(struct work_struct *work)
 		/*
 		 * The VM_BUG_ON_FOLIO(!folio_test_hugetlb(folio), folio) in
 		 * folio_hstate() is going to trigger because a previous call to
-		 * remove_hugetlb_folio() will call folio_set_compound_dtor
-		 * (folio, NULL_COMPOUND_DTOR), so do not use folio_hstate()
-		 * directly.
+		 * remove_hugetlb_folio() will clear the hugetlb bit, so do
+		 * not use folio_hstate() directly.
 		 */
 		h = size_to_hstate(page_size(page));

@@ -1955,7 +1936,7 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
 	hugetlb_vmemmap_optimize(h, &folio->page);
 	INIT_LIST_HEAD(&folio->lru);
-	folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
+	folio_set_hugetlb(folio);
 	hugetlb_set_folio_subpool(folio, NULL);
 	set_hugetlb_cgroup(folio, NULL);
 	set_hugetlb_cgroup_rsvd(folio, NULL);
@@ -2070,28 +2051,10 @@ int PageHuge(struct page *page)
 	if (!PageCompound(page))
 		return 0;
 	folio = page_folio(page);
-	return folio->_folio_dtor == HUGETLB_PAGE_DTOR;
+	return folio_test_hugetlb(folio);
 }
 EXPORT_SYMBOL_GPL(PageHuge);

-/**
- * folio_test_hugetlb - Determine if the folio belongs to hugetlbfs
- * @folio: The folio to test.
- *
- * Context: Any context.  Caller should have a reference on the folio to
- * prevent it from being turned into a tail page.
- * Return: True for hugetlbfs folios, false for anon folios or folios
- * belonging to other filesystems.
- */
-bool folio_test_hugetlb(struct folio *folio)
-{
-	if (!folio_test_large(folio))
-		return false;
-
-	return folio->_folio_dtor == HUGETLB_PAGE_DTOR;
-}
-EXPORT_SYMBOL_GPL(folio_test_hugetlb);
-
 /*
  * Find and lock address space (mapping) in write mode.
  *
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9638fdddf065..f8e276de4fd5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1122,7 +1122,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);

 	if (compound)
-		ClearPageHasHWPoisoned(page);
+		page[1].flags &= ~PAGE_FLAGS_SECOND;
 	for (i = 1; i < (1 << order); i++) {
 		if (compound)
 			bad += free_tail_page_prepare(page, page + i);

From patchwork Wed Aug 16 15:11:56 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 08/13] mm: Add large_rmappable page flag
Date: Wed, 16 Aug 2023 16:11:56 +0100
Message-Id: <20230816151201.3655946-9-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>

Stored in the first tail page's flags, this flag replaces the destructor.
That removes the last of the destructors, so remove all references to
folio_dtor and compound_dtor.
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/admin-guide/kdump/vmcoreinfo.rst |  4 ++--
 include/linux/mm.h                             | 13 -------------
 include/linux/mm_types.h                       |  2 --
 include/linux/page-flags.h                     |  7 ++++++-
 kernel/crash_core.c                            |  1 -
 mm/huge_memory.c                               |  4 ++--
 mm/internal.h                                  |  1 -
 mm/page_alloc.c                                |  7 +------
 8 files changed, 11 insertions(+), 28 deletions(-)

diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
index baa1c355741d..3bd38ac0e7de 100644
--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
+++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -141,8 +141,8 @@ nodemask_t
 The size of a nodemask_t type. Used to compute the number of online
 nodes.

-(page, flags|_refcount|mapping|lru|_mapcount|private|compound_dtor|compound_order|compound_head)
-------------------------------------------------------------------------------------------------
+(page, flags|_refcount|mapping|lru|_mapcount|private|compound_order|compound_head)
+----------------------------------------------------------------------------------

 User-space tools compute their values based on the offset of these
 variables. The variables are used when excluding unnecessary pages.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 642f5fe5860e..cf0ae8c51d7f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1267,19 +1267,6 @@ void folio_copy(struct folio *dst, struct folio *src);

 unsigned long nr_free_buffer_pages(void);

-enum compound_dtor_id {
-	COMPOUND_PAGE_DTOR,
-	TRANSHUGE_PAGE_DTOR,
-	NR_COMPOUND_DTORS,
-};
-
-static inline void folio_set_compound_dtor(struct folio *folio,
-		enum compound_dtor_id compound_dtor)
-{
-	VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
-	folio->_folio_dtor = compound_dtor;
-}
-
 void destroy_large_folio(struct folio *folio);

 /* Returns the number of bytes in this potentially compound page. */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index da538ff68953..d45a2b8041e0 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -282,7 +282,6 @@ static inline struct page *encoded_page_ptr(struct encoded_page *page)
 * @_refcount: Do not access this member directly.  Use folio_ref_count()
 *    to find how many references there are to this folio.
 * @memcg_data: Memory Control Group data.
- * @_folio_dtor: Which destructor to use for this folio.
 * @_folio_order: Do not use directly, call folio_order().
 * @_entire_mapcount: Do not use directly, call folio_entire_mapcount().
 * @_nr_pages_mapped: Do not use directly, call folio_mapcount().
@@ -336,7 +335,6 @@ struct folio {
 			unsigned long _flags_1;
 			unsigned long _head_1;
 /* public: */
-			unsigned char _folio_dtor;
 			unsigned char _folio_order;
 			atomic_t _entire_mapcount;
 			atomic_t _nr_pages_mapped;
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index aeecf0cf1456..732d13c708e7 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -190,6 +190,7 @@ enum pageflags {
 	/* At least one page in this folio has the hwpoison flag set */
 	PG_has_hwpoisoned = PG_error,
 	PG_hugetlb = PG_active,
+	PG_large_rmappable = PG_workingset, /* anon or file-backed */
 };

 #define PAGEFLAGS_MASK		((1UL << NR_PAGEFLAGS) - 1)
@@ -806,6 +807,9 @@ static inline void ClearPageCompound(struct page *page)
 	BUG_ON(!PageHead(page));
 	ClearPageHead(page);
 }
+PAGEFLAG(LargeRmappable, large_rmappable, PF_SECOND)
+#else
+TESTPAGEFLAG_FALSE(LargeRmappable, large_rmappable)
 #endif

 #define PG_head_mask ((1UL << PG_head))
@@ -1061,7 +1065,8 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
 * the CHECK_AT_FREE flags above, so need to be cleared.
 */
 #define PAGE_FLAGS_SECOND \
-	(1UL << PG_has_hwpoisoned | 1UL << PG_hugetlb)
+	(1UL << PG_has_hwpoisoned | 1UL << PG_hugetlb | \
+	 1UL << PG_large_rmappable)

 #define PAGE_FLAGS_PRIVATE \
 	(1UL << PG_private | 1UL << PG_private_2)
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index dd5f87047d06..934dd86e19f5 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -455,7 +455,6 @@ static int __init crash_save_vmcoreinfo_init(void)
 	VMCOREINFO_OFFSET(page, lru);
 	VMCOREINFO_OFFSET(page, _mapcount);
 	VMCOREINFO_OFFSET(page, private);
-	VMCOREINFO_OFFSET(folio, _folio_dtor);
 	VMCOREINFO_OFFSET(folio, _folio_order);
 	VMCOREINFO_OFFSET(page, compound_head);
 	VMCOREINFO_OFFSET(pglist_data, node_zones);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 04664e6918c1..c721f7ec5b6a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -581,7 +581,7 @@ void folio_prep_large_rmappable(struct folio *folio)
 {
 	VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
 	INIT_LIST_HEAD(&folio->_deferred_list);
-	folio_set_compound_dtor(folio, TRANSHUGE_PAGE_DTOR);
+	folio_set_large_rmappable(folio);
 }

 static inline bool is_transparent_hugepage(struct page *page)
@@ -593,7 +593,7 @@ static inline bool is_transparent_hugepage(struct page *page)

 	folio = page_folio(page);
 	return is_huge_zero_page(&folio->page) ||
-	       folio->_folio_dtor == TRANSHUGE_PAGE_DTOR;
+	       folio_test_large_rmappable(folio);
 }

 static unsigned long __thp_get_unmapped_area(struct file *filp,
diff --git a/mm/internal.h b/mm/internal.h
index 1e98c867f0de..9dc7629ffbc9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -419,7 +419,6 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
 {
 	struct folio *folio = (struct folio *)page;

-	folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
 	folio_set_order(folio, order);
 	atomic_set(&folio->_entire_mapcount, -1);
 	atomic_set(&folio->_nr_pages_mapped, 0);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f8e276de4fd5..81b1c7e3a28b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -582,9 +582,6 @@ static inline void free_the_page(struct page *page, unsigned int order)
 * The remaining PAGE_SIZE pages are called "tail pages". PageTail() is encoded
 * in bit 0 of page->compound_head. The rest of bits is pointer to head page.
 *
- * The first tail page's ->compound_dtor describes how to destroy the
- * compound page.
- *
 * The first tail page's ->compound_order holds the order of allocation.
 * This usage means that zero-order pages may not be compound.
 */
@@ -603,14 +600,12 @@ void prep_compound_page(struct page *page, unsigned int order)

 void destroy_large_folio(struct folio *folio)
 {
-	enum compound_dtor_id dtor = folio->_folio_dtor;
-
 	if (folio_test_hugetlb(folio)) {
 		free_huge_folio(folio);
 		return;
 	}

-	if (folio_test_transhuge(folio) && dtor == TRANSHUGE_PAGE_DTOR)
+	if (folio_test_large_rmappable(folio))
 		folio_undo_large_rmappable(folio);

 	mem_cgroup_uncharge(folio);

From patchwork Wed Aug 16 15:11:57 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 09/13] mm: Rearrange page flags
Date: Wed, 16 Aug 2023 16:11:57 +0100
Message-Id: <20230816151201.3655946-10-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>

Move PG_writeback into the bottom byte so that it can use PG_waiters in a
later patch.  Move PG_head into the bottom byte as well to match with
where 'order' is moving next.  PG_active and PG_workingset move into the
second byte to make room for them.

By putting PG_head in bit 6, we ensure that it is cleared by assigning the
folio order to the bottom byte of the first tail page (since the order
cannot be larger than 63).
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/page-flags.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 732d13c708e7..b452fba9bc71 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -99,13 +99,15 @@
 */
 enum pageflags {
 	PG_locked,		/* Page is locked. Don't touch. */
+	PG_writeback,		/* Page is under writeback */
 	PG_referenced,
 	PG_uptodate,
 	PG_dirty,
 	PG_lru,
+	PG_head,		/* Must be in bit 6 */
+	PG_waiters,		/* Page has waiters, check its waitqueue. Must be bit #7 and in the same byte as "PG_locked" */
 	PG_active,
 	PG_workingset,
-	PG_waiters,		/* Page has waiters, check its waitqueue. Must be bit #7 and in the same byte as "PG_locked" */
 	PG_error,
 	PG_slab,
 	PG_owner_priv_1,	/* Owner use. If pagecache, fs may use*/
@@ -113,8 +115,6 @@ enum pageflags {
 	PG_reserved,
 	PG_private,		/* If pagecache, has fs-private data */
 	PG_private_2,		/* If pagecache, has fs aux data */
-	PG_writeback,		/* Page is under writeback */
-	PG_head,		/* A head page */
 	PG_mappedtodisk,	/* Has blocks allocated on-disk */
 	PG_reclaim,		/* To be reclaimed asap */
 	PG_swapbacked,		/* Page is backed by RAM/swap */

From patchwork Wed Aug 16 15:11:58 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 10/13] mm: Free up a word in the first tail page
Date: Wed, 16 Aug 2023 16:11:58 +0100
Message-Id: <20230816151201.3655946-11-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>

U2FsdGVkX19xlX5j0wYtPrcmP/8SIIvslPEtYaB/nocxZm5AnDs05iKBhMR4OBwGelI4J4HJWzGj/+LORZENfNmldHFIPLAOaXprTySEtGQuW9I8iUOxWVxbV+Lan4kQ3fJOafT59PBNceHWeg1Vq7LQTcaCNMc5Bg8uNMfkMY2dxbUwf8JYc4BLu/FGyc91GxxtqF2jRmNHK8sLHmIEs06XfAXk51TgMWAbpXHxxfhtqsJ5T6xb/zyXs2XSNDSiLapHXtbDU5BJnT8gvDkSxUsevGdxC3IjiUTtYEvl0KKZz13yL3auO9DsiMSRx23leRaRtXSJMoue7KFx8SSZsb55JynD/lXrPDXcgCfcNu+AOAVQeGEsMMY599algFtoA8dzBpYEDWFjvX4tuBdIbGTY1Uryxg6RXmf/QG4D5lfXMaAjq19/v5VKVO+M+5X2cOcifrBYkoJGdKJocEtitSsiWmC+0+fVIQDV61VrjKFxH2AyAxj7otgqOKV3Ke6Jx75QwIGElwu5yf77xvaRpC3G75mco50Po6/LXXbAFmn7C2uqUvm2D3gTaBEkPWS5+Cv8fCi9sfGX4l/6HME3jxl8oC8GR1hV+OI66Sa9iC4BfQoCx1IRSXN5puFmxbEeOIkJw72VRfW6UYUUBFP3EtIue3p+qn08TdVucLeoJuadvVpM6G6obR7PhI2gUIomTdzuNcLYzEo7SYAG3fSkT+4WmB9bcibrvsgM5jgrR61fUMGrKhWD3/TDT5n3kAsx9LrXB5POArr7Ca6Y1CWCEbrMHJm9H9NNZv89h2QtQHflbz2S3JcaAlw2V/Tj9sbHqHd7vTARUajUK87pAvrP6vycaUJtqUMyiM1BhIlGvgdfGVy5O4e07NDIKr5yqzeRxoVg7uMfVhNdJUJMZEUqEXdJP5VJ5fvpa8HDpQwwa5MlOIfBUj3TD+6D8K57HptQR0AQp4rsd585ayKvfhL Heo+Pj3C bBFIJsNmCJaZtVaVF1Da8hcYDGWDk4Y02ACp1+pJW4S1cpnNdfLpW1Ogmj7f+Kffa4l3s2DpjXGdZQOXTtc5ntwgl6RtdtS5Wy8ZDtTQZw5o9g5H4GVRFPfvfK2rfKCI2hDOy1suNMY1+D5NbWOcbmRDt1jDgLNSzVyHXHkAQajza564wx4t+ZRxEWF1QG5z9nf4jfHvuoZnMR1zfxvotcpdIQDjxghWIQafmfx1h07MzxP+erxwALqkmBA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Store the folio order in the low byte of the flags word in the first tail page. This frees up the word that was being used to store the order and dtor bytes previously. 
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h         | 10 +++++-----
 include/linux/mm_types.h   |  3 +--
 include/linux/page-flags.h |  7 ++++---
 kernel/crash_core.c        |  1 -
 mm/internal.h              |  2 +-
 5 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index cf0ae8c51d7f..85568e2b2556 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1028,7 +1028,7 @@ struct inode;
  * compound_order() can be called without holding a reference, which means
  * that niceties like page_folio() don't work.  These callers should be
  * prepared to handle wild return values.  For example, PG_head may be
- * set before _folio_order is initialised, or this may be a tail page.
+ * set before the order is initialised, or this may be a tail page.
  * See compaction.c for some good examples.
  */
 static inline unsigned int compound_order(struct page *page)
@@ -1037,7 +1037,7 @@ static inline unsigned int compound_order(struct page *page)
 
 	if (!test_bit(PG_head, &folio->flags))
 		return 0;
-	return folio->_folio_order;
+	return folio->_flags_1 & 0xff;
 }
 
 /**
@@ -1053,7 +1053,7 @@ static inline unsigned int folio_order(struct folio *folio)
 {
 	if (!folio_test_large(folio))
 		return 0;
-	return folio->_folio_order;
+	return folio->_flags_1 & 0xff;
 }
 
 #include
@@ -2025,7 +2025,7 @@ static inline long folio_nr_pages(struct folio *folio)
 #ifdef CONFIG_64BIT
 	return folio->_folio_nr_pages;
 #else
-	return 1L << folio->_folio_order;
+	return 1L << (folio->_flags_1 & 0xff);
 #endif
 }
 
@@ -2043,7 +2043,7 @@ static inline unsigned long compound_nr(struct page *page)
 #ifdef CONFIG_64BIT
 	return folio->_folio_nr_pages;
 #else
-	return 1L << folio->_folio_order;
+	return 1L << (folio->_flags_1 & 0xff);
 #endif
 }
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index d45a2b8041e0..659c7b84726c 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -282,7 +282,6 @@ static inline struct page *encoded_page_ptr(struct encoded_page *page)
 * @_refcount: Do not access this member directly.  Use folio_ref_count()
 *    to find how many references there are to this folio.
 * @memcg_data: Memory Control Group data.
- * @_folio_order: Do not use directly, call folio_order().
 * @_entire_mapcount: Do not use directly, call folio_entire_mapcount().
 * @_nr_pages_mapped: Do not use directly, call folio_mapcount().
 * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
@@ -334,8 +333,8 @@ struct folio {
 		struct {
 			unsigned long _flags_1;
 			unsigned long _head_1;
+			unsigned long _folio_avail;
 	/* public: */
-			unsigned char _folio_order;
 			atomic_t _entire_mapcount;
 			atomic_t _nr_pages_mapped;
 			atomic_t _pincount;
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b452fba9bc71..5b466e619f71 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -184,7 +184,8 @@ enum pageflags {
 
 	/*
 	 * Flags only valid for compound pages.  Stored in first tail page's
-	 * flags word.
+	 * flags word.  Cannot use the first 8 flags or any flag marked as
+	 * PF_ANY.
 	 */
 
 	/* At least one page in this folio has the hwpoison flag set */
@@ -1065,8 +1066,8 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
 * the CHECK_AT_FREE flags above, so need to be cleared.
 */
 #define PAGE_FLAGS_SECOND \
-	(1UL << PG_has_hwpoisoned | 1UL << PG_hugetlb | \
-	 1UL << PG_large_rmappable)
+	(0xffUL /* order */ | 1UL << PG_has_hwpoisoned | \
+	 1UL << PG_hugetlb | 1UL << PG_large_rmappable)
 
 #define PAGE_FLAGS_PRIVATE \
 	(1UL << PG_private | 1UL << PG_private_2)
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 934dd86e19f5..693445e1f7f6 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -455,7 +455,6 @@ static int __init crash_save_vmcoreinfo_init(void)
 	VMCOREINFO_OFFSET(page, lru);
 	VMCOREINFO_OFFSET(page, _mapcount);
 	VMCOREINFO_OFFSET(page, private);
-	VMCOREINFO_OFFSET(folio, _folio_order);
 	VMCOREINFO_OFFSET(page, compound_head);
 	VMCOREINFO_OFFSET(pglist_data, node_zones);
 	VMCOREINFO_OFFSET(pglist_data, nr_zones);
diff --git a/mm/internal.h b/mm/internal.h
index 9dc7629ffbc9..5c777b6779fa 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -407,7 +407,7 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
 	if (WARN_ON_ONCE(!order || !folio_test_large(folio)))
 		return;
 
-	folio->_folio_order = order;
+	folio->_flags_1 = (folio->_flags_1 & ~0xffUL) | order;
 #ifdef CONFIG_64BIT
 	folio->_folio_nr_pages = 1U << order;
 #endif

From patchwork Wed Aug 16 15:11:59 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13355398
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 11/13] mm: Remove folio_test_transhuge()
Date: Wed, 16 Aug 2023 16:11:59 +0100
Message-Id: <20230816151201.3655946-12-willy@infradead.org>
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>

This function is misleading; people think it means "Is this a THP",
when all it actually does is check whether this is a large folio.
Remove it; the one remaining user should have been checking to see
whether the folio is PMD sized or not.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/page-flags.h | 5 -----
 mm/memcontrol.c            | 2 +-
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5b466e619f71..e3ca17e95bbf 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -853,11 +853,6 @@ static inline int PageTransHuge(struct page *page)
 	return PageHead(page);
 }
 
-static inline bool folio_test_transhuge(struct folio *folio)
-{
-	return folio_test_head(folio);
-}
-
 /*
  * PageTransCompound returns true for both transparent huge pages
  * and hugetlbfs pages, so it should only be called when it's known
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 35d7e66ab032..67bda1ceedbe 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5765,7 +5765,7 @@ static int mem_cgroup_move_account(struct page *page,
 	if (folio_mapped(folio)) {
 		__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
 		__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
-		if (folio_test_transhuge(folio)) {
+		if (folio_test_pmd_mappable(folio)) {
 			__mod_lruvec_state(from_vec, NR_ANON_THPS, -nr_pages);
 			__mod_lruvec_state(to_vec, NR_ANON_THPS,

From patchwork Wed Aug 16 15:12:00 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13355390
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 12/13] mm: Add tail private fields to struct folio
Date: Wed, 16 Aug 2023 16:12:00 +0100
Message-Id: <20230816151201.3655946-13-willy@infradead.org>
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>

Because THP_SWAP uses page->private for each page, we must not use
the space which overlaps that field for anything which would conflict
with that.  We avoid the conflict on 32-bit systems by disallowing
THP_SWAP on 32-bit.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm_types.h | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 659c7b84726c..3880b3f2e321 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -340,8 +340,11 @@ struct folio {
 			atomic_t _pincount;
 #ifdef CONFIG_64BIT
 			unsigned int _folio_nr_pages;
-#endif
+			/* 4 byte gap here */
 	/* private: the union with struct page is transitional */
+			/* Fix THP_SWAP to not use tail->private */
+			unsigned long _private_1;
+#endif
 		};
 		struct page __page_1;
 	};
@@ -362,6 +365,9 @@ struct folio {
 	/* public: */
 			struct list_head _deferred_list;
 	/* private: the union with struct page is transitional */
+			unsigned long _avail_2a;
+			/* Fix THP_SWAP to not use tail->private */
+			unsigned long _private_2a;
 		};
 		struct page __page_2;
 	};
@@ -386,12 +392,18 @@ FOLIO_MATCH(memcg_data, memcg_data);
 			offsetof(struct page, pg) + sizeof(struct page))
 FOLIO_MATCH(flags, _flags_1);
 FOLIO_MATCH(compound_head, _head_1);
+#ifdef CONFIG_64BIT
+FOLIO_MATCH(private, _private_1);
+#endif
 #undef FOLIO_MATCH
 #define FOLIO_MATCH(pg, fl) \
 	static_assert(offsetof(struct folio, fl) == \
 			offsetof(struct page, pg) + 2 * sizeof(struct page))
 FOLIO_MATCH(flags, _flags_2);
 FOLIO_MATCH(compound_head, _head_2);
+FOLIO_MATCH(flags, _flags_2a);
+FOLIO_MATCH(compound_head, _head_2a);
+FOLIO_MATCH(private, _private_2a);
 #undef FOLIO_MATCH
 
 /*

From patchwork Wed Aug 16 15:12:01 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13355385
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Jens Axboe, io-uring@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 13/13] mm: Convert split_huge_pages_pid() to use a folio
Date: Wed, 16 Aug 2023 16:12:01 +0100
Message-Id: <20230816151201.3655946-14-willy@infradead.org>
In-Reply-To: <20230816151201.3655946-1-willy@infradead.org>
References: <20230816151201.3655946-1-willy@infradead.org>

Replaces five calls to compound_head with one.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/huge_memory.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c721f7ec5b6a..4ffc78edaf26 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -584,14 +584,11 @@ void folio_prep_large_rmappable(struct folio *folio)
 	folio_set_large_rmappable(folio);
 }
 
-static inline bool is_transparent_hugepage(struct page *page)
+static inline bool is_transparent_hugepage(struct folio *folio)
 {
-	struct folio *folio;
-
-	if (!PageCompound(page))
+	if (!folio_test_large(folio))
 		return false;
 
-	folio = page_folio(page);
 	return is_huge_zero_page(&folio->page) ||
 	       folio_test_large_rmappable(folio);
 }
@@ -3015,6 +3012,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
 		struct vm_area_struct *vma = vma_lookup(mm, addr);
 		struct page *page;
+		struct folio *folio;
 
 		if (!vma)
 			break;
@@ -3031,22 +3029,23 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		if (IS_ERR_OR_NULL(page))
 			continue;
 
-		if (!is_transparent_hugepage(page))
+		folio = page_folio(page);
+		if (!is_transparent_hugepage(folio))
 			goto next;
 
 		total++;
-		if (!can_split_folio(page_folio(page), NULL))
+		if (!can_split_folio(folio, NULL))
 			goto next;
 
-		if (!trylock_page(page))
+		if (!folio_trylock(folio))
 			goto next;
 
-		if (!split_huge_page(page))
+		if (!split_folio(folio))
 			split++;
-		unlock_page(page);
+		folio_unlock(folio);
 
 next:
-		put_page(page);
+		folio_put(folio);
 		cond_resched();
 	}
 	mmap_read_unlock(mm);