From patchwork Mon Sep 11 19:41:42 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13379606
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Miaohe Lin, Kefeng Wang, Zi Yan,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/6] mm: page_alloc: remove pcppage migratetype caching
Date: Mon, 11 Sep 2023 15:41:42 -0400
Message-ID: <20230911195023.247694-2-hannes@cmpxchg.org>
In-Reply-To: <20230911195023.247694-1-hannes@cmpxchg.org>
References: <20230911195023.247694-1-hannes@cmpxchg.org>

The idea behind the cache is to save get_pageblock_migratetype()
lookups during bulk freeing. A microbenchmark suggests this isn't
helping, though. The pcp migratetype can get stale, which means that
bulk freeing has an extra branch to check if the pageblock was
isolated while on the pcp. While the variance overlaps, the cache
write and the branch seem to make this a net negative. The following
test allocates and frees batches of 10,000 pages (~3x the pcp high
marks to trigger flushing):

Before:
          8,668.48 msec task-clock        #   99.735 CPUs utilized    ( +-  2.90% )
                19      context-switches  #    4.341 /sec             ( +-  3.24% )
                 0      cpu-migrations    #    0.000 /sec
            17,440      page-faults       #    3.984 K/sec            ( +-  2.90% )
    41,758,692,473      cycles            #    9.541 GHz              ( +-  2.90% )
   126,201,294,231      instructions      #    5.98  insn per cycle   ( +-  2.90% )
    25,348,098,335      branches          #    5.791 G/sec            ( +-  2.90% )
        33,436,921      branch-misses     #    0.26% of all branches  ( +-  2.90% )

         0.0869148 +- 0.0000302 seconds time elapsed  ( +-  0.03% )

After:
          8,444.81 msec task-clock        #   99.726 CPUs utilized    ( +-  2.90% )
                22      context-switches  #    5.160 /sec             ( +-  3.23% )
                 0      cpu-migrations    #    0.000 /sec
            17,443      page-faults       #    4.091 K/sec            ( +-  2.90% )
    40,616,738,355      cycles            #    9.527 GHz              ( +-  2.90% )
   126,383,351,792      instructions      #    6.16  insn per cycle   ( +-  2.90% )
    25,224,985,153      branches          #    5.917 G/sec            ( +-  2.90% )
        32,236,793      branch-misses     #    0.25% of all branches  ( +-  2.90% )

         0.0846799 +- 0.0000412 seconds time elapsed  ( +-  0.05% )

A side effect is that this also ensures that pages whose pageblock
gets stolen while on the pcplist end up on the right freelist, and
that we don't perform potentially type-incompatible buddy merges (or
skip merges when we shouldn't), which is likely beneficial to
long-term fragmentation management, although the effects would be
harder to measure. Settle for simpler and faster code as justification
here.

Signed-off-by: Johannes Weiner
Acked-by: Zi Yan
Acked-by: Mel Gorman
Reviewed-by: Vlastimil Babka
---
 mm/page_alloc.c | 61 ++++++++++++-------------------------------------
 1 file changed, 14 insertions(+), 47 deletions(-)
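
As a rough sketch, such a test can be driven from a kernel module
along these lines (the batch size is from the description above; the
loop count, naming and module boilerplate are assumptions, not the
actual test):

#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* hypothetical pcp-flush microbenchmark; not part of this series */
static int __init pcp_bench_init(void)
{
	struct page **pages;
	int i, j;

	pages = kvmalloc_array(10000, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	for (j = 0; j < 100; j++) {
		/* fill well past the pcp high marks... */
		for (i = 0; i < 10000; i++)
			pages[i] = alloc_page(GFP_KERNEL);
		/* ...so that freeing funnels through free_pcppages_bulk() */
		for (i = 0; i < 10000; i++)
			if (pages[i])
				__free_page(pages[i]);
	}
	kvfree(pages);
	return 0;
}
module_init(pcp_bench_init);
MODULE_LICENSE("GPL");
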
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 95546f376302..e3f1c777feed 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -204,24 +204,6 @@ EXPORT_SYMBOL(node_states);
 
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
 
-/*
- * A cached value of the page's pageblock's migratetype, used when the page is
- * put on a pcplist. Used to avoid the pageblock migratetype lookup when
- * freeing from pcplists in most cases, at the cost of possibly becoming stale.
- * Also the migratetype set in the page does not necessarily match the pcplist
- * index, e.g. page might have MIGRATE_CMA set but be on a pcplist with any
- * other index - this ensures that it will be put on the correct CMA freelist.
- */
-static inline int get_pcppage_migratetype(struct page *page)
-{
-	return page->index;
-}
-
-static inline void set_pcppage_migratetype(struct page *page, int migratetype)
-{
-	page->index = migratetype;
-}
-
 #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
 unsigned int pageblock_order __read_mostly;
 #endif
@@ -1186,7 +1168,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 {
 	unsigned long flags;
 	unsigned int order;
-	bool isolated_pageblocks;
 	struct page *page;
 
 	/*
@@ -1199,7 +1180,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	pindex = pindex - 1;
 
 	spin_lock_irqsave(&zone->lock, flags);
-	isolated_pageblocks = has_isolate_pageblock(zone);
 
 	while (count > 0) {
 		struct list_head *list;
@@ -1215,10 +1195,12 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		order = pindex_to_order(pindex);
 		nr_pages = 1 << order;
 		do {
+			unsigned long pfn;
 			int mt;
 
 			page = list_last_entry(list, struct page, pcp_list);
-			mt = get_pcppage_migratetype(page);
+			pfn = page_to_pfn(page);
+			mt = get_pfnblock_migratetype(page, pfn);
 
 			/* must delete to avoid corrupting pcp list */
 			list_del(&page->pcp_list);
@@ -1227,11 +1209,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			/* MIGRATE_ISOLATE page should not go to pcplists */
 			VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
 
-			/* Pageblock could have been isolated meanwhile */
-			if (unlikely(isolated_pageblocks))
-				mt = get_pageblock_migratetype(page);
-
-			__free_one_page(page, page_to_pfn(page), zone, order, mt, FPI_NONE);
+			__free_one_page(page, pfn, zone, order, mt, FPI_NONE);
 			trace_mm_page_pcpu_drain(page, order, mt);
 		} while (count > 0 && !list_empty(list));
 	}
@@ -1577,7 +1556,6 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 			continue;
 		del_page_from_free_list(page, zone, current_order);
 		expand(zone, page, order, current_order, migratetype);
-		set_pcppage_migratetype(page, migratetype);
 		trace_mm_page_alloc_zone_locked(page, order, migratetype,
 				pcp_allowed_order(order) &&
 				migratetype < MIGRATE_PCPTYPES);
@@ -2145,7 +2123,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 		 * pages are ordered properly.
 		 */
 		list_add_tail(&page->pcp_list, list);
-		if (is_migrate_cma(get_pcppage_migratetype(page)))
+		if (is_migrate_cma(get_pageblock_migratetype(page)))
 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
 					      -(1 << order));
 	}
@@ -2304,19 +2282,6 @@ void drain_all_pages(struct zone *zone)
 	__drain_all_pages(zone, false);
 }
 
-static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
-				    unsigned int order)
-{
-	int migratetype;
-
-	if (!free_pages_prepare(page, order, FPI_NONE))
-		return false;
-
-	migratetype = get_pfnblock_migratetype(page, pfn);
-	set_pcppage_migratetype(page, migratetype);
-	return true;
-}
-
 static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
 {
 	int min_nr_free, max_nr_free;
@@ -2402,7 +2367,7 @@ void free_unref_page(struct page *page, unsigned int order)
 	unsigned long pfn = page_to_pfn(page);
 	int migratetype, pcpmigratetype;
 
-	if (!free_unref_page_prepare(page, pfn, order))
+	if (!free_pages_prepare(page, order, FPI_NONE))
 		return;
 
 	/*
@@ -2412,7 +2377,7 @@
 	 * get those areas back if necessary. Otherwise, we may have to free
 	 * excessively into the page allocator
 	 */
-	migratetype = pcpmigratetype = get_pcppage_migratetype(page);
+	migratetype = pcpmigratetype = get_pfnblock_migratetype(page, pfn);
 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
 			free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
@@ -2448,7 +2413,8 @@ void free_unref_page_list(struct list_head *list)
 	/* Prepare pages for freeing */
 	list_for_each_entry_safe(page, next, list, lru) {
 		unsigned long pfn = page_to_pfn(page);
-		if (!free_unref_page_prepare(page, pfn, 0)) {
+
+		if (!free_pages_prepare(page, 0, FPI_NONE)) {
 			list_del(&page->lru);
 			continue;
 		}
@@ -2457,7 +2423,7 @@ void free_unref_page_list(struct list_head *list)
 		 * Free isolated pages directly to the allocator, see
 		 * comment in free_unref_page.
 		 */
-		migratetype = get_pcppage_migratetype(page);
+		migratetype = get_pfnblock_migratetype(page, pfn);
 		if (unlikely(is_migrate_isolate(migratetype))) {
 			list_del(&page->lru);
 			free_one_page(page_zone(page), page, pfn, 0, migratetype, FPI_NONE);
@@ -2466,10 +2432,11 @@ void free_unref_page_list(struct list_head *list)
 	}
 
 	list_for_each_entry_safe(page, next, list, lru) {
+		unsigned long pfn = page_to_pfn(page);
 		struct zone *zone = page_zone(page);
 
 		list_del(&page->lru);
-		migratetype = get_pcppage_migratetype(page);
+		migratetype = get_pfnblock_migratetype(page, pfn);
 
 		/*
 		 * Either different zone requiring a different pcp lock or
@@ -2492,7 +2459,7 @@ void free_unref_page_list(struct list_head *list)
 			pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 			if (unlikely(!pcp)) {
 				pcp_trylock_finish(UP_flags);
-				free_one_page(zone, page, page_to_pfn(page),
+				free_one_page(zone, page, pfn,
 					      0, migratetype, FPI_NONE);
 				locked_zone = NULL;
 				continue;
@@ -2661,7 +2628,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 			}
 		}
 		__mod_zone_freepage_state(zone, -(1 << order),
-					  get_pcppage_migratetype(page));
+					  get_pageblock_migratetype(page));
 		spin_unlock_irqrestore(&zone->lock, flags);
 	} while (check_new_pages(page, order));

From patchwork Mon Sep 11 19:41:43 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13379607
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Miaohe Lin, Kefeng Wang, Zi Yan,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/6] mm: page_alloc: fix up block types when merging compatible blocks
Date: Mon, 11 Sep 2023 15:41:43 -0400
Message-ID: <20230911195023.247694-3-hannes@cmpxchg.org>
In-Reply-To: <20230911195023.247694-1-hannes@cmpxchg.org>
References: <20230911195023.247694-1-hannes@cmpxchg.org>

The buddy allocator coalesces compatible blocks during freeing, but it
doesn't update the types of the subblocks to match. When an allocation
later breaks the chunk down again, its pieces will be put on freelists
of the wrong type. This encourages incompatible page mixing (ask for
one type, get another), and thus long-term fragmentation.

Update the subblocks when merging a larger chunk, such that a later
expand() will maintain freelist type hygiene.
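
To make the failure mode concrete, here is a small userspace model
(not kernel code; the two-block layout and the type names are
assumptions chosen for illustration). It mirrors the
set_pageblock_migratetype(buddy, migratetype) fixup added below:

#include <stdio.h>

enum mt { MOVABLE, RECLAIMABLE };

/* one migratetype per pageblock; block 0 is free MOVABLE, its buddy is not */
static enum mt blocktype[2] = { MOVABLE, RECLAIMABLE };

/* merging two pageblock-sized buddies into one larger free chunk */
static void merge_blocks(int block, int buddy)
{
	if (blocktype[block] != blocktype[buddy]) {
		/*
		 * Without this fixup, a later split of the merged chunk
		 * consults blocktype[buddy] and files those sub-blocks on
		 * the RECLAIMABLE freelist, even though the chunk lives
		 * on (and is handed out from) the MOVABLE freelist.
		 */
		blocktype[buddy] = blocktype[block];
	}
	/* ...the merged chunk is then linked on blocktype[block]'s list... */
}

int main(void)
{
	merge_blocks(0, 1);
	printf("buddy block type after merge: %s\n",
	       blocktype[1] == MOVABLE ? "MOVABLE" : "RECLAIMABLE");
	return 0;
}
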
v2:
- remove spurious change_pageblock_range() move (Zi Yan)

Signed-off-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Acked-by: Mel Gorman
---
 mm/page_alloc.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e3f1c777feed..3db405414174 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -783,10 +783,17 @@ static inline void __free_one_page(struct page *page,
 			 */
 			int buddy_mt = get_pfnblock_migratetype(buddy, buddy_pfn);
 
-			if (migratetype != buddy_mt
-			    && (!migratetype_is_mergeable(migratetype) ||
-				!migratetype_is_mergeable(buddy_mt)))
-				goto done_merging;
+			if (migratetype != buddy_mt) {
+				if (!migratetype_is_mergeable(migratetype) ||
+				    !migratetype_is_mergeable(buddy_mt))
+					goto done_merging;
+				/*
+				 * Match buddy type. This ensures that
+				 * an expand() down the line puts the
+				 * sub-blocks on the right freelists.
+				 */
+				set_pageblock_migratetype(buddy, migratetype);
+			}
 		}
 
 	/*

From patchwork Mon Sep 11 19:41:44 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13379608
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Miaohe Lin, Kefeng Wang, Zi Yan,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/6] mm: page_alloc: move free pages when converting block during isolation
Date: Mon, 11 Sep 2023 15:41:44 -0400
Message-ID: <20230911195023.247694-4-hannes@cmpxchg.org>
In-Reply-To: <20230911195023.247694-1-hannes@cmpxchg.org>
References: <20230911195023.247694-1-hannes@cmpxchg.org>

When claiming a block during compaction isolation, move any remaining
free pages to the correct freelists as well, instead of stranding them
on the wrong list. Otherwise, this encourages incompatible page mixing
down the line, and thus long-term fragmentation.
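
The pattern this enforces, sketched with the current (pre-series)
move_freepages_block() signature; claim_block() is a hypothetical
helper for illustration, not a function added by this patch:

/*
 * Sketch: retyping a pageblock that may still hold free pages.
 * Changing the type only affects where *future* frees are filed;
 * pages already sitting on a freelist must be moved explicitly.
 */
static void claim_block(struct zone *zone, struct page *page, int new_mt)
{
	set_pageblock_migratetype(page, new_mt);	/* future frees */
	move_freepages_block(zone, page, new_mt, NULL);	/* current free pages */
}
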
Signed-off-by: Johannes Weiner
Reviewed-by: Zi Yan
Reviewed-by: Vlastimil Babka
Acked-by: Mel Gorman
---
 mm/page_alloc.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3db405414174..f6f658c3d394 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2548,9 +2548,12 @@ int __isolate_free_page(struct page *page, unsigned int order)
 			 * Only change normal pageblocks (i.e., they can merge
 			 * with others)
 			 */
-			if (migratetype_is_mergeable(mt))
+			if (migratetype_is_mergeable(mt)) {
 				set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+				move_freepages_block(zone, page,
+						     MIGRATE_MOVABLE, NULL);
+			}
 		}
 	}

From patchwork Mon Sep 11 19:41:45 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13379609
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Miaohe Lin, Kefeng Wang, Zi Yan,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/6] mm: page_alloc: fix move_freepages_block() range error
Date: Mon, 11 Sep 2023 15:41:45 -0400
Message-ID: <20230911195023.247694-5-hannes@cmpxchg.org>
In-Reply-To: <20230911195023.247694-1-hannes@cmpxchg.org>
References: <20230911195023.247694-1-hannes@cmpxchg.org>

When a block is partially outside the zone of the cursor page, the
function cuts the range to the pivot page instead of the zone start.
This can leave large parts of the block behind, which encourages
incompatible page mixing down the line (ask for one type, get
another), and thus long-term fragmentation.

This triggers reliably on the first block in the DMA zone, whose
start_pfn is 1. The block is stolen, but everything before the pivot
page (which was often hundreds of pages) is left on the old list.
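
To illustrate the arithmetic with concrete numbers (a userspace
sketch; the cursor pfn of 257 is an assumption, pageblock_order of 9
matches the common x86 configuration):

#include <stdio.h>

#define PAGEBLOCK_ORDER	9
#define PAGEBLOCK_NR	(1UL << PAGEBLOCK_ORDER)	/* 512 pages per block */

int main(void)
{
	unsigned long zone_start_pfn = 1;	/* DMA zone skips pfn 0 */
	unsigned long pfn = 257;		/* hypothetical cursor page */
	unsigned long block_start = pfn & ~(PAGEBLOCK_NR - 1);	/* pfn 0 */

	if (block_start < zone_start_pfn) {
		unsigned long old_clamp = pfn;			/* pre-fix */
		unsigned long new_clamp = zone_start_pfn;	/* post-fix */

		/* pages the old clamp left behind on the stale freelist */
		printf("stranded pfns %lu..%lu (%lu pages)\n",
		       new_clamp, old_clamp - 1, old_clamp - new_clamp);
	}
	return 0;
}

With these numbers the old clamp strands pfns 1..256, i.e. 256 pages,
consistent with the "hundreds of pages" observed above.
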
Signed-off-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Acked-by: Mel Gorman
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f6f658c3d394..5bbe5f3be5ad 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1652,7 +1652,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
 
 	/* Do not cross zone boundaries */
 	if (!zone_spans_pfn(zone, start_pfn))
-		start_pfn = pfn;
+		start_pfn = zone->zone_start_pfn;
 	if (!zone_spans_pfn(zone, end_pfn))
 		return 0;

From patchwork Mon Sep 11 19:41:46 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13379610
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Miaohe Lin, Kefeng Wang, Zi Yan,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 5/6] mm: page_alloc: fix freelist movement during block conversion
Date: Mon, 11 Sep 2023 15:41:46 -0400
Message-ID: <20230911195023.247694-6-hannes@cmpxchg.org>
In-Reply-To: <20230911195023.247694-1-hannes@cmpxchg.org>
References: <20230911195023.247694-1-hannes@cmpxchg.org>

Currently, page block type conversion during fallbacks, atomic
reservations and isolation can strand various amounts of free pages on
incorrect freelists.

For example, fallback stealing moves free pages in the block to the
new type's freelists, but then may not actually claim the block for
that type if there aren't enough compatible pages already allocated.

In all cases, free page moving might fail if the block straddles more
than one zone, in which case no free pages are moved at all, but the
block type is changed anyway.

This is detrimental to type hygiene on the freelists. It encourages
incompatible page mixing down the line (ask for one type, get another)
and thus contributes to long-term fragmentation.

Split the process into a proper transaction: check first if conversion
will happen, then try to move the free pages, and only if that was
successful convert the block to the new type.
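
The resulting pattern, as a sketch (convert_block() is a hypothetical
wrapper for illustration; the -1 return of move_freepages_block() for
blocks straddling a zone boundary is introduced by this patch):

static void convert_block(struct zone *zone, struct page *page, int new_mt)
{
	int nr_moved;

	/* try the move first; -1 means the block straddles a zone */
	nr_moved = move_freepages_block(zone, page, new_mt);
	if (nr_moved == -1)
		return;	/* no pages were moved: leave the type alone */

	/* commit the type change only after the move succeeded */
	set_pageblock_migratetype(page, new_mt);
}
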
Signed-off-by: Johannes Weiner
---
 include/linux/page-isolation.h |   3 +-
 mm/page_alloc.c                | 171 ++++++++++++++++++++-------------
 mm/page_isolation.c            |  22 +++--
 3 files changed, 118 insertions(+), 78 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 4ac34392823a..8550b3c91480 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -34,8 +34,7 @@ static inline bool is_migrate_isolate(int migratetype)
 #define REPORT_FAILURE	0x2
 
 void set_pageblock_migratetype(struct page *page, int migratetype);
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable);
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype);
 
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			     int migratetype, int flags, gfp_t gfp_flags);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5bbe5f3be5ad..a902593f16dd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1601,9 +1601,8 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
-static int move_freepages(struct zone *zone,
-			  unsigned long start_pfn, unsigned long end_pfn,
-			  int migratetype, int *num_movable)
+static int move_freepages(struct zone *zone, unsigned long start_pfn,
+			  unsigned long end_pfn, int migratetype)
 {
 	struct page *page;
 	unsigned long pfn;
@@ -1613,14 +1612,6 @@ static int move_freepages(struct zone *zone,
 	for (pfn = start_pfn; pfn <= end_pfn;) {
 		page = pfn_to_page(pfn);
 		if (!PageBuddy(page)) {
-			/*
-			 * We assume that pages that could be isolated for
-			 * migration are movable. But we don't actually try
-			 * isolating, as that would be expensive.
-			 */
-			if (num_movable &&
-			    (PageLRU(page) || __PageMovable(page)))
-				(*num_movable)++;
 			pfn++;
 			continue;
 		}
@@ -1638,26 +1629,62 @@ static int move_freepages(struct zone *zone,
 	return pages_moved;
 }
 
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable)
+static bool prep_move_freepages_block(struct zone *zone, struct page *page,
+				      unsigned long *start_pfn,
+				      unsigned long *end_pfn,
+				      int *num_free, int *num_movable)
 {
-	unsigned long start_pfn, end_pfn, pfn;
-
-	if (num_movable)
-		*num_movable = 0;
+	unsigned long pfn, start, end;
 
 	pfn = page_to_pfn(page);
-	start_pfn = pageblock_start_pfn(pfn);
-	end_pfn = pageblock_end_pfn(pfn) - 1;
+	start = pageblock_start_pfn(pfn);
+	end = pageblock_end_pfn(pfn) - 1;
 
 	/* Do not cross zone boundaries */
-	if (!zone_spans_pfn(zone, start_pfn))
-		start_pfn = zone->zone_start_pfn;
-	if (!zone_spans_pfn(zone, end_pfn))
-		return 0;
+	if (!zone_spans_pfn(zone, start))
+		start = zone->zone_start_pfn;
+	if (!zone_spans_pfn(zone, end))
+		return false;
+
+	*start_pfn = start;
+	*end_pfn = end;
+
+	if (num_free) {
+		*num_free = 0;
+		*num_movable = 0;
+		for (pfn = start; pfn <= end;) {
+			page = pfn_to_page(pfn);
+			if (PageBuddy(page)) {
+				int nr = 1 << buddy_order(page);
+
+				*num_free += nr;
+				pfn += nr;
+				continue;
+			}
+			/*
+			 * We assume that pages that could be isolated for
+			 * migration are movable. But we don't actually try
+			 * isolating, as that would be expensive.
+			 */
+			if (PageLRU(page) || __PageMovable(page))
+				(*num_movable)++;
+			pfn++;
+		}
+	}
 
-	return move_freepages(zone, start_pfn, end_pfn, migratetype,
-			      num_movable);
+	return true;
+}
+
+int move_freepages_block(struct zone *zone, struct page *page,
+			 int migratetype)
+{
+	unsigned long start_pfn, end_pfn;
+
+	if (!prep_move_freepages_block(zone, page, &start_pfn, &end_pfn,
+				       NULL, NULL))
+		return -1;
+
+	return move_freepages(zone, start_pfn, end_pfn, migratetype);
 }
 
 static void change_pageblock_range(struct page *pageblock_page,
@@ -1742,33 +1769,36 @@ static inline bool boost_watermark(struct zone *zone)
 }
 
 /*
- * This function implements actual steal behaviour. If order is large enough,
- * we can steal whole pageblock. If not, we first move freepages in this
- * pageblock to our migratetype and determine how many already-allocated pages
- * are there in the pageblock with a compatible migratetype. If at least half
- * of pages are free or compatible, we can change migratetype of the pageblock
- * itself, so pages freed in the future will be put on the correct free list.
+ * This function implements actual steal behaviour. If order is large enough, we
+ * can claim the whole pageblock for the requested migratetype. If not, we check
+ * the pageblock for constituent pages; if at least half of the pages are free
+ * or compatible, we can still claim the whole block, so pages freed in the
+ * future will be put on the correct free list. Otherwise, we isolate exactly
+ * the order we need from the fallback block and leave its migratetype alone.
 */
 static void steal_suitable_fallback(struct zone *zone, struct page *page,
-		unsigned int alloc_flags, int start_type, bool whole_block)
+				    int current_order, int order, int start_type,
+				    unsigned int alloc_flags, bool whole_block)
 {
-	unsigned int current_order = buddy_order(page);
 	int free_pages, movable_pages, alike_pages;
-	int old_block_type;
+	unsigned long start_pfn, end_pfn;
+	int block_type;
 
-	old_block_type = get_pageblock_migratetype(page);
+	block_type = get_pageblock_migratetype(page);
 
 	/*
 	 * This can happen due to races and we want to prevent broken
 	 * highatomic accounting.
 	 */
-	if (is_migrate_highatomic(old_block_type))
+	if (is_migrate_highatomic(block_type))
 		goto single_page;
 
 	/* Take ownership for orders >= pageblock_order */
 	if (current_order >= pageblock_order) {
+		del_page_from_free_list(page, zone, current_order);
 		change_pageblock_range(page, current_order, start_type);
-		goto single_page;
+		expand(zone, page, order, current_order, start_type);
+		return;
 	}
 
 	/*
@@ -1783,10 +1813,9 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	if (!whole_block)
 		goto single_page;
 
-	free_pages = move_freepages_block(zone, page, start_type,
-					  &movable_pages);
 	/* moving whole block can fail due to zone boundary conditions */
-	if (!free_pages)
+	if (!prep_move_freepages_block(zone, page, &start_pfn, &end_pfn,
+				       &free_pages, &movable_pages))
 		goto single_page;
 
 	/*
@@ -1804,7 +1833,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	 * vice versa, be conservative since we can't distinguish the
 	 * exact migratetype of non-movable pages.
 	 */
-	if (old_block_type == MIGRATE_MOVABLE)
+	if (block_type == MIGRATE_MOVABLE)
 		alike_pages = pageblock_nr_pages
 					- (free_pages + movable_pages);
 	else
@@ -1815,13 +1844,15 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	 * compatible migratability as our allocation, claim the whole block.
 	 */
 	if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
-			page_group_by_mobility_disabled)
+			page_group_by_mobility_disabled) {
+		move_freepages(zone, start_pfn, end_pfn, start_type);
 		set_pageblock_migratetype(page, start_type);
-
-	return;
+		block_type = start_type;
+	}
 
 single_page:
-	move_to_free_list(page, zone, current_order, start_type);
+	del_page_from_free_list(page, zone, current_order);
+	expand(zone, page, order, current_order, block_type);
 }
 
 /*
@@ -1885,9 +1916,10 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone)
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (migratetype_is_mergeable(mt)) {
-		zone->nr_reserved_highatomic += pageblock_nr_pages;
-		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
-		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
+		if (move_freepages_block(zone, page, MIGRATE_HIGHATOMIC) != -1) {
+			set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
+			zone->nr_reserved_highatomic += pageblock_nr_pages;
+		}
 	}
 
 out_unlock:
@@ -1912,7 +1944,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	struct zone *zone;
 	struct page *page;
 	int order;
-	bool ret;
+	int ret;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->highest_zoneidx,
 								ac->nodemask) {
@@ -1961,10 +1993,14 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 * of pageblocks that cannot be completely freed
 			 * may increase.
 			 */
+			ret = move_freepages_block(zone, page, ac->migratetype);
+			/*
+			 * Reserving this block already succeeded, so this should
+			 * not fail on zone boundaries.
+			 */
+			WARN_ON_ONCE(ret == -1);
 			set_pageblock_migratetype(page, ac->migratetype);
-			ret = move_freepages_block(zone, page, ac->migratetype,
-						   NULL);
-			if (ret) {
+			if (ret > 0) {
 				spin_unlock_irqrestore(&zone->lock, flags);
 				return ret;
 			}
@@ -1985,7 +2021,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	 * deviation from the rest of this file, to make the for loop
 	 * condition simpler.
@@ -1985,7 +2021,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
  * deviation from the rest of this file, to make the for loop
  * condition simpler.
 */
-static __always_inline bool
+static __always_inline struct page *
 __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 						unsigned int alloc_flags)
 {
@@ -2032,7 +2068,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 		goto do_steal;
 	}
 
-	return false;
+	return NULL;
 
 find_smallest:
 	for (current_order = order; current_order <= MAX_ORDER;
@@ -2053,14 +2089,14 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 do_steal:
 	page = get_page_from_free_area(area, fallback_mt);
 
-	steal_suitable_fallback(zone, page, alloc_flags, start_migratetype,
-								can_steal);
+	/* take off list, maybe claim block, expand remainder */
+	steal_suitable_fallback(zone, page, current_order, order,
+				start_migratetype, alloc_flags, can_steal);
 
 	trace_mm_page_alloc_extfrag(page, order, current_order,
 		start_migratetype, fallback_mt);
 
-	return true;
-
+	return page;
 }
 
 /*
@@ -2087,15 +2123,14 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 			return page;
 		}
 	}
-retry:
+
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
 		if (alloc_flags & ALLOC_CMA)
 			page = __rmqueue_cma_fallback(zone, order);
-
-		if (!page && __rmqueue_fallback(zone, order, migratetype,
-						alloc_flags))
-			goto retry;
+		else
+			page = __rmqueue_fallback(zone, order, migratetype,
+						  alloc_flags);
 	}
 	return page;
 }
@@ -2548,12 +2583,10 @@ int __isolate_free_page(struct page *page, unsigned int order)
 			 * Only change normal pageblocks (i.e., they can merge
 			 * with others)
 			 */
-			if (migratetype_is_mergeable(mt)) {
-				set_pageblock_migratetype(page,
-							  MIGRATE_MOVABLE);
-				move_freepages_block(zone, page,
-						     MIGRATE_MOVABLE, NULL);
-			}
+			if (migratetype_is_mergeable(mt) &&
+			    move_freepages_block(zone, page,
+						 MIGRATE_MOVABLE) != -1)
+				set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		}
 	}
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index bcf99ba747a0..cc48a3a52f00 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -178,15 +178,18 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
 			migratetype, isol_flags);
 	if (!unmovable) {
-		unsigned long nr_pages;
+		int nr_pages;
 		int mt = get_pageblock_migratetype(page);
 
+		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE);
+		/* Block spans zone boundaries? */
+		if (nr_pages == -1) {
+			spin_unlock_irqrestore(&zone->lock, flags);
+			return -EBUSY;
+		}
+		__mod_zone_freepage_state(zone, -nr_pages, mt);
 		set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 		zone->nr_isolate_pageblock++;
-		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
-									NULL);
-
-		__mod_zone_freepage_state(zone, -nr_pages, mt);
 		spin_unlock_irqrestore(&zone->lock, flags);
 		return 0;
 	}
@@ -206,7 +209,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 static void unset_migratetype_isolate(struct page *page, int migratetype)
 {
 	struct zone *zone;
-	unsigned long flags, nr_pages;
+	unsigned long flags;
 	bool isolated_page = false;
 	unsigned int order;
 	struct page *buddy;
@@ -252,7 +255,12 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 	 * allocation.
 	 */
 	if (!isolated_page) {
-		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
+		int nr_pages = move_freepages_block(zone, page, migratetype);
+		/*
+		 * Isolating this block already succeeded, so this
+		 * should not fail on zone boundaries.
+		 */
+		WARN_ON_ONCE(nr_pages == -1);
 		__mod_zone_freepage_state(zone, nr_pages, migratetype);
 	}
 	set_pageblock_migratetype(page, migratetype);

From patchwork Mon Sep 11 19:41:47 2023
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Miaohe Lin, Kefeng Wang, Zi Yan,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 6/6] mm: page_alloc: consolidate free page accounting
Date: Mon, 11 Sep 2023 15:41:47 -0400
Message-ID: <20230911195023.247694-7-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20230911195023.247694-1-hannes@cmpxchg.org>
References: <20230911195023.247694-1-hannes@cmpxchg.org>
MIME-Version: 1.0
Free page accounting currently happens a bit too high up the call
stack, where it has to deal with guard pages, compaction capturing,
block stealing and even page isolation. This is subtle and fragile,
and makes it difficult to hack on the code.

Now that type violations on the freelists have been fixed, push the
accounting down to where pages enter and leave the freelist.

v3:
- fix CONFIG_UNACCEPTED_MEMORY build (lkp)
v2:
- fix CONFIG_DEBUG_PAGEALLOC build (Mel)

Signed-off-by: Johannes Weiner
---
 include/linux/mm.h             |  18 ++---
 include/linux/page-isolation.h |   3 +-
 include/linux/vmstat.h         |   8 --
 mm/debug_page_alloc.c          |  12 +--
 mm/internal.h                  |   5 --
 mm/page_alloc.c                | 135 ++++++++++++++++++---------------
 mm/page_isolation.c            |   7 +-
 7 files changed, 90 insertions(+), 98 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bf5d0b1b16f4..d8698248f280 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3680,24 +3680,22 @@ static inline bool page_is_guard(struct page *page)
 	return PageGuard(page);
 }
 
-bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
-		      int migratetype);
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order);
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				  unsigned int order, int migratetype)
+				  unsigned int order)
 {
 	if (!debug_guardpage_enabled())
 		return false;
 
-	return __set_page_guard(zone, page, order, migratetype);
+	return __set_page_guard(zone, page, order);
 }
 
-void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
-			int migratetype);
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order);
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				    unsigned int order, int migratetype)
+				    unsigned int order)
 {
 	if (!debug_guardpage_enabled())
 		return;
 
-	__clear_page_guard(zone, page, order, migratetype);
+	__clear_page_guard(zone, page, order);
 }
 
 #else	/* CONFIG_DEBUG_PAGEALLOC */
@@ -3707,9 +3705,9 @@ static inline unsigned int debug_guardpage_minorder(void) { return 0; }
 static inline bool debug_guardpage_enabled(void) { return false; }
 static inline bool page_is_guard(struct page *page) { return false; }
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				  unsigned int order, int migratetype) { return false; }
+				  unsigned int order) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				    unsigned int order, int migratetype) {}
+				    unsigned int order) {}
 #endif	/* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
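[Aside, not part of the patch: what "push the accounting down to where pages enter and leave the freelist" buys can be shown with a toy model. toy_freelist, toy_add(), toy_del() and toy_split() are invented names; because the counter is touched only inside the add/del pair, a caller like the split loop stays balanced by construction.]

    #include <stdio.h>

    struct toy_freelist {
        long nr_free;
    };

    static void toy_add(struct toy_freelist *fl, int order)
    {
        /* single accounting point on freelist entry */
        fl->nr_free += 1L << order;
    }

    static void toy_del(struct toy_freelist *fl, int order)
    {
        /* single accounting point on freelist exit */
        fl->nr_free -= 1L << order;
    }

    /* a "split" written against the helpers needs no accounting of its own */
    static void toy_split(struct toy_freelist *fl, int high, int low)
    {
        toy_del(fl, high);
        while (high > low) {
            high--;
            toy_add(fl, high);  /* re-free the upper half */
        }
        /* the order-@low remainder is handed out, not re-added */
    }

    int main(void)
    {
        struct toy_freelist fl = { .nr_free = 1 << 9 };

        toy_split(&fl, 9, 0);
        printf("nr_free after split: %ld\n", fl.nr_free); /* 511 */
        return 0;
    }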
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 8550b3c91480..901915747960 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -34,7 +34,8 @@ static inline bool is_migrate_isolate(int migratetype)
 #define REPORT_FAILURE	0x2
 
 void set_pageblock_migratetype(struct page *page, int migratetype);
 
-int move_freepages_block(struct zone *zone, struct page *page, int migratetype);
+int move_freepages_block(struct zone *zone, struct page *page,
+			 int old_mt, int new_mt);
 
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			 int migratetype, int flags, gfp_t gfp_flags);
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index fed855bae6d8..a4eae03f6094 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -487,14 +487,6 @@ static inline void node_stat_sub_folio(struct folio *folio,
 	mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
 }
 
-static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
-					     int migratetype)
-{
-	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
-	if (is_migrate_cma(migratetype))
-		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
-}
-
 extern const char * const vmstat_text[];
 
 static inline const char *zone_stat_name(enum zone_stat_item item)
diff --git a/mm/debug_page_alloc.c b/mm/debug_page_alloc.c
index f9d145730fd1..03a810927d0a 100644
--- a/mm/debug_page_alloc.c
+++ b/mm/debug_page_alloc.c
@@ -32,8 +32,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 }
 early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
 
-bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
-		      int migratetype)
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order)
 {
 	if (order >= debug_guardpage_minorder())
 		return false;
@@ -41,19 +40,12 @@ bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
 	__SetPageGuard(page);
 	INIT_LIST_HEAD(&page->buddy_list);
 	set_page_private(page, order);
-	/* Guard pages are not available for any usage */
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, -(1 << order), migratetype);
 
 	return true;
 }
 
-void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
-			int migratetype)
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order)
 {
 	__ClearPageGuard(page);
-
 	set_page_private(page, 0);
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, (1 << order), migratetype);
 }
diff --git a/mm/internal.h b/mm/internal.h
index 30cf724ddbce..d53b70e9cc3a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -883,11 +883,6 @@ static inline bool is_migrate_highatomic(enum migratetype migratetype)
 	return migratetype == MIGRATE_HIGHATOMIC;
 }
 
-static inline bool is_migrate_highatomic_page(struct page *page)
-{
-	return get_pageblock_migratetype(page) == MIGRATE_HIGHATOMIC;
-}
-
 void setup_zone_pageset(struct zone *zone);
 
 struct migration_target_control {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a902593f16dd..bfede72251d9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -640,24 +640,36 @@ compaction_capture(struct capture_control *capc, struct page *page,
 }
 #endif /* CONFIG_COMPACTION */
 
-/* Used for pages not on another list */
-static inline void add_to_free_list(struct page *page, struct zone *zone,
-				    unsigned int order, int migratetype)
+static inline void account_freepages(struct page *page, struct zone *zone,
+				     int nr_pages, int migratetype)
 {
-	struct free_area *area = &zone->free_area[order];
+	if (is_migrate_isolate(migratetype))
+		return;
 
-	list_add(&page->buddy_list, &area->free_list[migratetype]);
-	area->nr_free++;
+	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
+
+	if (is_migrate_cma(migratetype))
+		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
 }
 
 /* Used for pages not on another list */
-static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
-					 unsigned int order, int migratetype)
+static inline void add_to_free_list(struct page *page, struct zone *zone,
+				    unsigned int order, int migratetype,
+				    bool tail)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_add_tail(&page->buddy_list, &area->free_list[migratetype]);
+	VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
+		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     get_pageblock_migratetype(page), migratetype, 1 << order);
+
+	if (tail)
+		list_add_tail(&page->buddy_list, &area->free_list[migratetype]);
+	else
+		list_add(&page->buddy_list, &area->free_list[migratetype]);
 	area->nr_free++;
+
+	account_freepages(page, zone, 1 << order, migratetype);
 }
 
 /*
@@ -666,16 +678,28 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
  * allocation again (e.g., optimization for memory onlining).
 */
 static inline void move_to_free_list(struct page *page, struct zone *zone,
-				     unsigned int order, int migratetype)
+				     unsigned int order, int old_mt, int new_mt)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_move_tail(&page->buddy_list, &area->free_list[migratetype]);
+	/* Free page moving can fail, so it happens before the type update */
+	VM_WARN_ONCE(get_pageblock_migratetype(page) != old_mt,
+		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     get_pageblock_migratetype(page), old_mt, 1 << order);
+
+	list_move_tail(&page->buddy_list, &area->free_list[new_mt]);
+
+	account_freepages(page, zone, -(1 << order), old_mt);
+	account_freepages(page, zone, 1 << order, new_mt);
 }
 
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
-					   unsigned int order)
+					   unsigned int order, int migratetype)
 {
+	VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
+		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     get_pageblock_migratetype(page), migratetype, 1 << order);
+
 	/* clear reported state and update reported page count */
 	if (page_reported(page))
 		__ClearPageReported(page);
@@ -684,6 +708,8 @@ static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 	__ClearPageBuddy(page);
 	set_page_private(page, 0);
 	zone->free_area[order].nr_free--;
+
+	account_freepages(page, zone, -(1 << order), migratetype);
 }
 
 static inline struct page *get_page_from_free_area(struct free_area *area,
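[Aside, not part of the patch: the VM_WARN_ONCE checks above cross-check the type recorded on the pageblock against the list the caller chose. A toy version of that invariant, with plain assert() standing in for VM_WARN_ONCE and invented toy_* names; unlike the kernel, the toy folds the type update into the move for brevity, where the kernel leaves set_pageblock_migratetype() to the caller.]

    #include <assert.h>
    #include <stdio.h>

    enum toy_mt { TOY_UNMOVABLE, TOY_MOVABLE, TOY_NR_TYPES };

    struct toy_block {
        enum toy_mt mt;     /* "pageblock" type of record */
    };

    static long freelist_len[TOY_NR_TYPES];

    static void toy_add(struct toy_block *b, enum toy_mt mt)
    {
        assert(b->mt == mt);    /* catch type violations at the boundary */
        freelist_len[mt]++;
    }

    static void toy_move(struct toy_block *b, enum toy_mt old_mt,
                         enum toy_mt new_mt)
    {
        assert(b->mt == old_mt);    /* move happens before the type update */
        freelist_len[old_mt]--;
        freelist_len[new_mt]++;
        b->mt = new_mt;
    }

    int main(void)
    {
        struct toy_block b = { .mt = TOY_MOVABLE };

        toy_add(&b, TOY_MOVABLE);
        toy_move(&b, TOY_MOVABLE, TOY_UNMOVABLE);
        printf("movable=%ld unmovable=%ld\n",
               freelist_len[TOY_MOVABLE], freelist_len[TOY_UNMOVABLE]);
        return 0;
    }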
@@ -757,23 +783,21 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
 	VM_BUG_ON(migratetype == -1);
-	if (likely(!is_migrate_isolate(migratetype)))
-		__mod_zone_freepage_state(zone, 1 << order, migratetype);
-
 	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 
 	while (order < MAX_ORDER) {
-		if (compaction_capture(capc, page, order, migratetype)) {
-			__mod_zone_freepage_state(zone, -(1 << order),
-								migratetype);
+		int buddy_mt;
+
+		if (compaction_capture(capc, page, order, migratetype))
 			return;
-		}
 
 		buddy = find_buddy_page_pfn(page, pfn, order, &buddy_pfn);
 		if (!buddy)
 			goto done_merging;
 
+		buddy_mt = get_pfnblock_migratetype(buddy, buddy_pfn);
+
 		if (unlikely(order >= pageblock_order)) {
 			/*
 			 * We want to prevent merge between freepages on pageblock
@@ -801,9 +825,9 @@ static inline void __free_one_page(struct page *page,
 		 * merge with it and move up one order.
 		 */
 		if (page_is_guard(buddy))
-			clear_page_guard(zone, buddy, order, migratetype);
+			clear_page_guard(zone, buddy, order);
 		else
-			del_page_from_free_list(buddy, zone, order);
+			del_page_from_free_list(buddy, zone, order, buddy_mt);
 		combined_pfn = buddy_pfn & pfn;
 		page = page + (combined_pfn - pfn);
 		pfn = combined_pfn;
@@ -820,10 +844,7 @@ static inline void __free_one_page(struct page *page,
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
 
-	if (to_tail)
-		add_to_free_list_tail(page, zone, order, migratetype);
-	else
-		add_to_free_list(page, zone, order, migratetype);
+	add_to_free_list(page, zone, order, migratetype, to_tail);
 
 	/* Notify page reporting subsystem of freed page */
 	if (!(fpi_flags & FPI_SKIP_REPORT_NOTIFY))
@@ -865,10 +886,8 @@ int split_free_page(struct page *free_page,
 	}
 
 	mt = get_pfnblock_migratetype(free_page, free_page_pfn);
-	if (likely(!is_migrate_isolate(mt)))
-		__mod_zone_freepage_state(zone, -(1UL << order), mt);
+	del_page_from_free_list(free_page, zone, order, mt);
 
-	del_page_from_free_list(free_page, zone, order);
 	for (pfn = free_page_pfn;
 	     pfn < free_page_pfn + (1UL << order);) {
 		int mt = get_pfnblock_migratetype(pfn_to_page(pfn), pfn);
@@ -1388,10 +1407,10 @@ static inline void expand(struct zone *zone, struct page *page,
 		 * Corresponding page table entries will not be touched,
 		 * pages will stay not present in virtual address space
 		 */
-		if (set_page_guard(zone, &page[size], high, migratetype))
+		if (set_page_guard(zone, &page[size], high))
 			continue;
 
-		add_to_free_list(&page[size], zone, high, migratetype);
+		add_to_free_list(&page[size], zone, high, migratetype, false);
 		set_buddy_order(&page[size], high);
 	}
 }
@@ -1561,7 +1580,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 		page = get_page_from_free_area(area, migratetype);
 		if (!page)
 			continue;
-		del_page_from_free_list(page, zone, current_order);
+		del_page_from_free_list(page, zone, current_order, migratetype);
 		expand(zone, page, order, current_order, migratetype);
 		trace_mm_page_alloc_zone_locked(page, order, migratetype,
 				pcp_allowed_order(order) &&
@@ -1602,7 +1621,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  * boundary. If alignment is required, use move_freepages_block()
 */
 static int move_freepages(struct zone *zone, unsigned long start_pfn,
-			  unsigned long end_pfn, int migratetype)
+			  unsigned long end_pfn, int old_mt, int new_mt)
 {
 	struct page *page;
 	unsigned long pfn;
@@ -1621,7 +1640,7 @@ static int move_freepages(struct zone *zone, unsigned long start_pfn,
 		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
 
 		order = buddy_order(page);
-		move_to_free_list(page, zone, order, migratetype);
+		move_to_free_list(page, zone, order, old_mt, new_mt);
 		pfn += 1 << order;
 		pages_moved += 1 << order;
 	}
@@ -1676,7 +1695,7 @@ static bool prep_move_freepages_block(struct zone *zone, struct page *page,
 }
 
 int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype)
+			 int old_mt, int new_mt)
 {
 	unsigned long start_pfn, end_pfn;
 
@@ -1684,7 +1703,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
 			NULL, NULL))
 		return -1;
 
-	return move_freepages(zone, start_pfn, end_pfn, migratetype);
+	return move_freepages(zone, start_pfn, end_pfn, old_mt, new_mt);
 }
 
 static void change_pageblock_range(struct page *pageblock_page,
@@ -1795,7 +1814,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 
 	/* Take ownership for orders >= pageblock_order */
 	if (current_order >= pageblock_order) {
-		del_page_from_free_list(page, zone, current_order);
+		del_page_from_free_list(page, zone, current_order, block_type);
 		change_pageblock_range(page, current_order, start_type);
 		expand(zone, page, order, current_order, start_type);
 		return;
@@ -1845,13 +1864,13 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	 */
 	if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
 			page_group_by_mobility_disabled) {
-		move_freepages(zone, start_pfn, end_pfn, start_type);
+		move_freepages(zone, start_pfn, end_pfn, block_type, start_type);
 		set_pageblock_migratetype(page, start_type);
 		block_type = start_type;
 	}
 
 single_page:
-	del_page_from_free_list(page, zone, current_order);
+	del_page_from_free_list(page, zone, current_order, block_type);
 	expand(zone, page, order, current_order, block_type);
 }
 
@@ -1916,7 +1935,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone)
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (migratetype_is_mergeable(mt)) {
-		if (move_freepages_block(zone, page, MIGRATE_HIGHATOMIC) != -1) {
+		if (move_freepages_block(zone, page,
+					 mt, MIGRATE_HIGHATOMIC) != -1) {
 			set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
 			zone->nr_reserved_highatomic += pageblock_nr_pages;
 		}
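[Aside, not part of the patch: the old_mt/new_mt pair above makes a block move a subtract-from-old plus add-to-new, which is what keeps per-type counters such as NR_FREE_CMA_PAGES exact across conversions. A toy model with invented names, using made-up type values:]

    #include <stdio.h>

    enum { TOY_MOVABLE, TOY_HIGHATOMIC, TOY_NR_TYPES };

    static long nr_free_by_type[TOY_NR_TYPES];

    static void account(int nr_pages, int mt)
    {
        nr_free_by_type[mt] += nr_pages;
    }

    static void move_pages(int nr_pages, int old_mt, int new_mt)
    {
        account(-nr_pages, old_mt);   /* leaves the old type's count */
        account(nr_pages, new_mt);    /* enters the new type's count */
    }

    int main(void)
    {
        account(512, TOY_MOVABLE);                  /* block starts movable */
        move_pages(512, TOY_MOVABLE, TOY_HIGHATOMIC); /* reserved wholesale */
        printf("movable=%ld highatomic=%ld\n",
               nr_free_by_type[TOY_MOVABLE],
               nr_free_by_type[TOY_HIGHATOMIC]);
        return 0;
    }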
@@ -1959,11 +1979,13 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 		spin_lock_irqsave(&zone->lock, flags);
 		for (order = 0; order <= MAX_ORDER; order++) {
 			struct free_area *area = &(zone->free_area[order]);
+			int mt;
 
 			page = get_page_from_free_area(area, MIGRATE_HIGHATOMIC);
 			if (!page)
 				continue;
 
+			mt = get_pageblock_migratetype(page);
 			/*
 			 * In page freeing path, migratetype change is racy so
 			 * we can counter several free pages in a pageblock
 			 * from highatomic to ac->migratetype. So we should
 			 * adjust the count once.
 			 */
-			if (is_migrate_highatomic_page(page)) {
+			if (is_migrate_highatomic(mt)) {
 				/*
 				 * It should never happen but changes to
 				 * locking could inadvertently allow a per-cpu
@@ -1993,7 +2015,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 * of pageblocks that cannot be completely freed
 			 * may increase.
 			 */
-			ret = move_freepages_block(zone, page, ac->migratetype);
+			ret = move_freepages_block(zone, page, mt,
+						   ac->migratetype);
 			/*
 			 * Reserving this block already succeeded, so this should
 			 * not fail on zone boundaries.
@@ -2165,12 +2188,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 		 * pages are ordered properly.
 		 */
 		list_add_tail(&page->pcp_list, list);
-		if (is_migrate_cma(get_pageblock_migratetype(page)))
-			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
-					      -(1 << order));
 	}
 
-	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
 	spin_unlock_irqrestore(&zone->lock, flags);
 
 	return i;
@@ -2565,11 +2583,9 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		watermark = zone->_watermark[WMARK_MIN] + (1UL << order);
 		if (!zone_watermark_ok(zone, 0, watermark, 0, ALLOC_CMA))
 			return 0;
-
-		__mod_zone_freepage_state(zone, -(1UL << order), mt);
 	}
 
-	del_page_from_free_list(page, zone, order);
+	del_page_from_free_list(page, zone, order, mt);
 
 	/*
 	 * Set the pageblock if the isolated page is at least half of a
@@ -2584,7 +2600,7 @@ int __isolate_free_page(struct page *page, unsigned int order)
 			 * with others)
 			 */
 			if (migratetype_is_mergeable(mt) &&
-			    move_freepages_block(zone, page,
+			    move_freepages_block(zone, page, mt,
 						 MIGRATE_MOVABLE) != -1)
 				set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		}
@@ -2670,8 +2686,6 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 				return NULL;
 			}
 		}
-		__mod_zone_freepage_state(zone, -(1 << order),
-					  get_pageblock_migratetype(page));
 		spin_unlock_irqrestore(&zone->lock, flags);
 	} while (check_new_pages(page, order));
@@ -6434,8 +6448,9 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 		BUG_ON(page_count(page));
 		BUG_ON(!PageBuddy(page));
+		VM_WARN_ON(get_pageblock_migratetype(page) != MIGRATE_ISOLATE);
 		order = buddy_order(page);
-		del_page_from_free_list(page, zone, order);
+		del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
 		pfn += (1 << order);
 	}
 	spin_unlock_irqrestore(&zone->lock, flags);
@@ -6486,11 +6501,12 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
 			current_buddy = page + size;
 		}
 
-		if (set_page_guard(zone, current_buddy, high, migratetype))
+		if (set_page_guard(zone, current_buddy, high))
 			continue;
 
 		if (current_buddy != target) {
-			add_to_free_list(current_buddy, zone, high, migratetype);
+			add_to_free_list(current_buddy, zone, high,
+					 migratetype, false);
 			set_buddy_order(current_buddy, high);
 			page = next_page;
 		}
@@ -6518,12 +6534,11 @@ bool take_page_off_buddy(struct page *page)
 			int migratetype = get_pfnblock_migratetype(page_head,
 								   pfn_head);
 
-			del_page_from_free_list(page_head, zone, page_order);
+			del_page_from_free_list(page_head, zone, page_order,
+						migratetype);
 			break_down_buddy_pages(zone, page_head, page, 0,
 						page_order, migratetype);
 			SetPageHWPoisonTakenOff(page);
-			if (!is_migrate_isolate(migratetype))
-				__mod_zone_freepage_state(zone, -1, migratetype);
 			ret = true;
 			break;
 		}
@@ -6630,7 +6645,7 @@ static bool try_to_accept_memory_one(struct zone *zone)
 	list_del(&page->lru);
 	last = list_empty(&zone->unaccepted_pages);
 
-	__mod_zone_freepage_state(zone, -MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
+	account_freepages(page, zone, -MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
 	__mod_zone_page_state(zone, NR_UNACCEPTED, -MAX_ORDER_NR_PAGES);
 
 	spin_unlock_irqrestore(&zone->lock, flags);
@@ -6682,7 +6697,7 @@ static bool __free_unaccepted(struct page *page)
 	spin_lock_irqsave(&zone->lock, flags);
 	first = list_empty(&zone->unaccepted_pages);
 	list_add_tail(&page->lru, &zone->unaccepted_pages);
-	__mod_zone_freepage_state(zone, MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
+	account_freepages(page, zone, MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
 	__mod_zone_page_state(zone, NR_UNACCEPTED, MAX_ORDER_NR_PAGES);
 
 	spin_unlock_irqrestore(&zone->lock, flags);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index cc48a3a52f00..b5c7a9d21257 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -181,13 +181,12 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 		int nr_pages;
 		int mt = get_pageblock_migratetype(page);
 
-		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE);
+		nr_pages = move_freepages_block(zone, page, mt, MIGRATE_ISOLATE);
 		/* Block spans zone boundaries? */
 		if (nr_pages == -1) {
 			spin_unlock_irqrestore(&zone->lock, flags);
 			return -EBUSY;
 		}
-		__mod_zone_freepage_state(zone, -nr_pages, mt);
 		set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 		zone->nr_isolate_pageblock++;
 		spin_unlock_irqrestore(&zone->lock, flags);
@@ -255,13 +254,13 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 	 * allocation.
 	 */
 	if (!isolated_page) {
-		int nr_pages = move_freepages_block(zone, page, migratetype);
+		int nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
+						    migratetype);
 		/*
 		 * Isolating this block already succeeded, so this
 		 * should not fail on zone boundaries.
 		 */
 		WARN_ON_ONCE(nr_pages == -1);
-		__mod_zone_freepage_state(zone, nr_pages, migratetype);
 	}
 	set_pageblock_migratetype(page, migratetype);
 	if (isolated_page)