From patchwork Wed Mar 20 02:42:14 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kaiyang Zhao <kaiyang2@cs.cmu.edu>
X-Patchwork-Id: 13597221
From: kaiyang2@cs.cmu.edu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Kaiyang Zhao <kaiyang2@cs.cmu.edu>, hannes@cmpxchg.org, ziy@nvidia.com,
	dskarlat@cs.cmu.edu
Subject: [RFC PATCH 3/7] compaction accepts a destination zone
Date: Wed, 20 Mar 2024 02:42:14 +0000
Message-Id: <20240320024218.203491-4-kaiyang2@cs.cmu.edu>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20240320024218.203491-1-kaiyang2@cs.cmu.edu>
References: <20240320024218.203491-1-kaiyang2@cs.cmu.edu>
Reply-To: Kaiyang Zhao <kaiyang2@cs.cmu.edu>

From: Kaiyang Zhao <kaiyang2@cs.cmu.edu>

Distinguish the source and destination zones in compaction.

Signed-off-by: Kaiyang Zhao <kaiyang2@cs.cmu.edu>
---
 include/linux/compaction.h |   4 +-
 mm/compaction.c            | 106 +++++++++++++++++++++++--------------
 mm/internal.h              |   1 +
 mm/vmscan.c                |   4 +-
 4 files changed, 70 insertions(+), 45 deletions(-)

diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index a6e512cfb670..11f5a1a83abb 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -90,7 +90,7 @@ extern enum compact_result try_to_compact_pages(gfp_t gfp_mask,
 		struct page **page);
 extern void reset_isolation_suitable(pg_data_t *pgdat);
 extern enum compact_result compaction_suitable(struct zone *zone, int order,
-		unsigned int alloc_flags, int highest_zoneidx);
+		unsigned int alloc_flags, int highest_zoneidx, struct zone *dst_zone);
 
 extern void compaction_defer_reset(struct zone *zone, int order,
 		bool alloc_success);
@@ -180,7 +180,7 @@ static inline void reset_isolation_suitable(pg_data_t *pgdat)
 }
 
 static inline enum compact_result compaction_suitable(struct zone *zone, int order,
-					int alloc_flags, int highest_zoneidx)
+					int alloc_flags, int highest_zoneidx, struct zone *dst_zone)
 {
 	return COMPACT_SKIPPED;
 }
diff --git a/mm/compaction.c b/mm/compaction.c
index c8bcdea15f5f..03b5c4debc17 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -435,7 +435,7 @@ static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
 static void update_pageblock_skip(struct compact_control *cc,
 			struct page *page, unsigned long pfn)
 {
-	struct zone *zone = cc->zone;
+	struct zone *dst_zone = cc->dst_zone ? cc->dst_zone : cc->zone;
 
 	if (cc->no_set_skip_hint)
 		return;
@@ -446,8 +446,8 @@ static void update_pageblock_skip(struct compact_control *cc,
 	set_pageblock_skip(page);
 
 	/* Update where async and sync compaction should restart */
-	if (pfn < zone->compact_cached_free_pfn)
-		zone->compact_cached_free_pfn = pfn;
+	if (pfn < dst_zone->compact_cached_free_pfn)
+		dst_zone->compact_cached_free_pfn = pfn;
 }
 #else
 static inline bool isolation_suitable(struct compact_control *cc,
@@ -550,6 +550,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	bool locked = false;
 	unsigned long blockpfn = *start_pfn;
 	unsigned int order;
+	struct zone *dst_zone = cc->dst_zone ? cc->dst_zone : cc->zone;
 
 	/* Strict mode is for isolation, speed is secondary */
 	if (strict)
@@ -568,7 +569,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		 * pending.
 		 */
 		if (!(blockpfn % COMPACT_CLUSTER_MAX)
-		    && compact_unlock_should_abort(&cc->zone->lock, flags,
+		    && compact_unlock_should_abort(&dst_zone->lock, flags,
 								&locked, cc))
 			break;
 
@@ -596,7 +597,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 		/* If we already hold the lock, we can skip some rechecking. */
 		if (!locked) {
-			locked = compact_lock_irqsave(&cc->zone->lock,
+			locked = compact_lock_irqsave(&dst_zone->lock,
 								&flags, cc);
 
 			/* Recheck this is a buddy page under lock */
@@ -634,7 +635,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	}
 
 	if (locked)
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		spin_unlock_irqrestore(&dst_zone->lock, flags);
 
 	/*
 	 * There is a tiny chance that we have read bogus compound_order(),
@@ -683,11 +684,12 @@ isolate_freepages_range(struct compact_control *cc,
 {
 	unsigned long isolated, pfn, block_start_pfn, block_end_pfn;
 	LIST_HEAD(freelist);
+	struct zone *dst_zone = cc->dst_zone ? cc->dst_zone : cc->zone;
 
 	pfn = start_pfn;
 	block_start_pfn = pageblock_start_pfn(pfn);
-	if (block_start_pfn < cc->zone->zone_start_pfn)
-		block_start_pfn = cc->zone->zone_start_pfn;
+	if (block_start_pfn < dst_zone->zone_start_pfn)
+		block_start_pfn = dst_zone->zone_start_pfn;
 	block_end_pfn = pageblock_end_pfn(pfn);
 
 	for (; pfn < end_pfn; pfn += isolated,
@@ -710,7 +712,7 @@ isolate_freepages_range(struct compact_control *cc,
 		}
 
 		if (!pageblock_pfn_to_page(block_start_pfn,
-					block_end_pfn, cc->zone))
+					block_end_pfn, dst_zone))
 			break;
 
 		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
@@ -1359,6 +1361,7 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn)
 {
 	unsigned long start_pfn, end_pfn;
 	struct page *page;
+	struct zone *dst_zone = cc->dst_zone ? cc->dst_zone : cc->zone;
 
 	/* Do not search around if there are enough pages already */
 	if (cc->nr_freepages >= cc->nr_migratepages)
@@ -1369,10 +1372,10 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn)
 		return;
 
 	/* Pageblock boundaries */
-	start_pfn = max(pageblock_start_pfn(pfn), cc->zone->zone_start_pfn);
-	end_pfn = min(pageblock_end_pfn(pfn), zone_end_pfn(cc->zone));
+	start_pfn = max(pageblock_start_pfn(pfn), dst_zone->zone_start_pfn);
+	end_pfn = min(pageblock_end_pfn(pfn), zone_end_pfn(dst_zone));
 
-	page = pageblock_pfn_to_page(start_pfn, end_pfn, cc->zone);
+	page = pageblock_pfn_to_page(start_pfn, end_pfn, dst_zone);
 	if (!page)
 		return;
 
@@ -1414,6 +1417,7 @@ fast_isolate_freepages(struct compact_control *cc)
 	struct page *page = NULL;
 	bool scan_start = false;
 	int order;
+	struct zone *dst_zone = cc->dst_zone ? cc->dst_zone : cc->zone;
 
 	/* Full compaction passes in a negative order */
 	if (cc->order <= 0)
@@ -1423,7 +1427,7 @@ fast_isolate_freepages(struct compact_control *cc)
 	 * If starting the scan, use a deeper search and use the highest
 	 * PFN found if a suitable one is not found.
 	 */
-	if (cc->free_pfn >= cc->zone->compact_init_free_pfn) {
+	if (cc->free_pfn >= dst_zone->compact_init_free_pfn) {
 		limit = pageblock_nr_pages >> 1;
 		scan_start = true;
 	}
@@ -1448,7 +1452,7 @@ fast_isolate_freepages(struct compact_control *cc)
 	for (order = cc->search_order;
 	     !page && order >= 0;
 	     order = next_search_order(cc, order)) {
-		struct free_area *area = &cc->zone->free_area[order];
+		struct free_area *area = &dst_zone->free_area[order];
 		struct list_head *freelist;
 		struct page *freepage;
 		unsigned long flags;
@@ -1458,7 +1462,7 @@ fast_isolate_freepages(struct compact_control *cc)
 		if (!area->nr_free)
 			continue;
 
-		spin_lock_irqsave(&cc->zone->lock, flags);
+		spin_lock_irqsave(&dst_zone->lock, flags);
 		freelist = &area->free_list[MIGRATE_MOVABLE];
 		list_for_each_entry_reverse(freepage, freelist, lru) {
 			unsigned long pfn;
@@ -1469,7 +1473,7 @@ fast_isolate_freepages(struct compact_control *cc)
 
 			if (pfn >= highest)
 				highest = max(pageblock_start_pfn(pfn),
-					      cc->zone->zone_start_pfn);
+					      dst_zone->zone_start_pfn);
 
 			if (pfn >= low_pfn) {
 				cc->fast_search_fail = 0;
@@ -1516,7 +1520,7 @@ fast_isolate_freepages(struct compact_control *cc)
 			}
 		}
 
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		spin_unlock_irqrestore(&dst_zone->lock, flags);
 
 		/*
 		 * Smaller scan on next order so the total scan is related
@@ -1541,17 +1545,17 @@ fast_isolate_freepages(struct compact_control *cc)
 				if (cc->direct_compaction && pfn_valid(min_pfn)) {
 					page = pageblock_pfn_to_page(min_pfn,
 						min(pageblock_end_pfn(min_pfn),
-						    zone_end_pfn(cc->zone)),
-						cc->zone);
+						    zone_end_pfn(dst_zone)),
+						dst_zone);
 					cc->free_pfn = min_pfn;
 				}
 			}
 		}
 	}
 
-	if (highest && highest >= cc->zone->compact_cached_free_pfn) {
+	if (highest && highest >= dst_zone->compact_cached_free_pfn) {
 		highest -= pageblock_nr_pages;
-		cc->zone->compact_cached_free_pfn = highest;
+		dst_zone->compact_cached_free_pfn = highest;
 	}
 
 	cc->total_free_scanned += nr_scanned;
@@ -1569,7 +1573,7 @@ fast_isolate_freepages(struct compact_control *cc)
  */
 static void isolate_freepages(struct compact_control *cc)
 {
-	struct zone *zone = cc->zone;
+	struct zone *zone = cc->dst_zone ? cc->dst_zone : cc->zone;
 	struct page *page;
 	unsigned long block_start_pfn;	/* start of current pageblock */
 	unsigned long isolate_start_pfn; /* exact pfn we start at */
@@ -2089,11 +2093,19 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 	unsigned int order;
 	const int migratetype = cc->migratetype;
 	int ret;
+	struct zone *dst_zone = cc->dst_zone ? cc->dst_zone : cc->zone;
 
-	/* Compaction run completes if the migrate and free scanner meet */
-	if (compact_scanners_met(cc)) {
+	/*
+	 * Compaction run completes if the migrate and free scanner meet
+	 * or when either the src or dst zone has been completely scanned
+	 */
+	if (compact_scanners_met(cc) ||
+	    cc->migrate_pfn >= zone_end_pfn(cc->zone) ||
+	    cc->free_pfn < dst_zone->zone_start_pfn) {
 		/* Let the next compaction start anew. */
 		reset_cached_positions(cc->zone);
+		if (cc->dst_zone)
+			reset_cached_positions(cc->dst_zone);
 
 		/*
 		 * Mark that the PG_migrate_skip information should be cleared
@@ -2196,10 +2208,13 @@ static enum compact_result compact_finished(struct compact_control *cc)
 static enum compact_result __compaction_suitable(struct zone *zone, int order,
 					unsigned int alloc_flags,
 					int highest_zoneidx,
-					unsigned long wmark_target)
+					unsigned long wmark_target, struct zone *dst_zone)
 {
 	unsigned long watermark;
 
+	if (!dst_zone)
+		dst_zone = zone;
+
 	if (is_via_compact_memory(order))
 		return COMPACT_CONTINUE;
 
@@ -2227,9 +2242,9 @@ static enum compact_result __compaction_suitable(struct zone *zone, int order,
 	 * suitable migration targets
 	 */
 	watermark = (order > PAGE_ALLOC_COSTLY_ORDER) ?
-				low_wmark_pages(zone) : min_wmark_pages(zone);
+				low_wmark_pages(dst_zone) : min_wmark_pages(dst_zone);
 	watermark += compact_gap(order);
-	if (!__zone_watermark_ok(zone, 0, watermark, highest_zoneidx,
+	if (!__zone_watermark_ok(dst_zone, 0, watermark, highest_zoneidx,
 						ALLOC_CMA, wmark_target))
 		return COMPACT_SKIPPED;
 
@@ -2245,13 +2260,16 @@ static enum compact_result __compaction_suitable(struct zone *zone, int order,
  */
 enum compact_result compaction_suitable(struct zone *zone, int order,
 					unsigned int alloc_flags,
-					int highest_zoneidx)
+					int highest_zoneidx, struct zone *dst_zone)
 {
 	enum compact_result ret;
 	int fragindex;
 
+	if (!dst_zone)
+		dst_zone = zone;
+
 	ret = __compaction_suitable(zone, order, alloc_flags, highest_zoneidx,
-				    zone_page_state(zone, NR_FREE_PAGES));
+				    zone_page_state(dst_zone, NR_FREE_PAGES), dst_zone);
 	/*
 	 * fragmentation index determines if allocation failures are due to
 	 * low memory or external fragmentation
@@ -2305,7 +2323,7 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
 		available = zone_reclaimable_pages(zone) / order;
 		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
 		compact_result = __compaction_suitable(zone, order, alloc_flags,
-				ac->highest_zoneidx, available);
+				ac->highest_zoneidx, available, NULL);
 		if (compact_result == COMPACT_CONTINUE)
 			return true;
 	}
@@ -2317,8 +2335,9 @@ static enum compact_result
 compact_zone(struct compact_control *cc, struct capture_control *capc)
 {
 	enum compact_result ret;
+	struct zone *dst_zone = cc->dst_zone ? cc->dst_zone : cc->zone;
 	unsigned long start_pfn = cc->zone->zone_start_pfn;
-	unsigned long end_pfn = zone_end_pfn(cc->zone);
+	unsigned long end_pfn = zone_end_pfn(dst_zone);
 	unsigned long last_migrated_pfn;
 	const bool sync = cc->mode != MIGRATE_ASYNC;
 	bool update_cached;
@@ -2337,7 +2356,7 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 
 	cc->migratetype = gfp_migratetype(cc->gfp_mask);
 	ret = compaction_suitable(cc->zone, cc->order, cc->alloc_flags,
-							cc->highest_zoneidx);
+							cc->highest_zoneidx, dst_zone);
 	/* Compaction is likely to fail */
 	if (ret == COMPACT_SUCCESS || ret == COMPACT_SKIPPED)
 		return ret;
@@ -2346,14 +2365,19 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 	 * Clear pageblock skip if there were failures recently and compaction
 	 * is about to be retried after being deferred.
 	 */
-	if (compaction_restarting(cc->zone, cc->order))
+	if (compaction_restarting(cc->zone, cc->order)) {
 		__reset_isolation_suitable(cc->zone);
+		if (dst_zone != cc->zone)
+			__reset_isolation_suitable(dst_zone);
+	}
 
 	/*
 	 * Setup to move all movable pages to the end of the zone. Used cached
 	 * information on where the scanners should start (unless we explicitly
 	 * want to compact the whole zone), but check that it is initialised
 	 * by ensuring the values are within zone boundaries.
+	 *
+	 * If a destination zone is provided, use it for free pages.
 	 */
 	cc->fast_start_pfn = 0;
 	if (cc->whole_zone) {
@@ -2361,12 +2385,12 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 		cc->migrate_pfn = start_pfn;
 		cc->free_pfn = pageblock_start_pfn(end_pfn - 1);
 	} else {
 		cc->migrate_pfn = cc->zone->compact_cached_migrate_pfn[sync];
-		cc->free_pfn = cc->zone->compact_cached_free_pfn;
-		if (cc->free_pfn < start_pfn || cc->free_pfn >= end_pfn) {
+		cc->free_pfn = dst_zone->compact_cached_free_pfn;
+		if (cc->free_pfn < dst_zone->zone_start_pfn || cc->free_pfn >= end_pfn) {
 			cc->free_pfn = pageblock_start_pfn(end_pfn - 1);
-			cc->zone->compact_cached_free_pfn = cc->free_pfn;
+			dst_zone->compact_cached_free_pfn = cc->free_pfn;
 		}
-		if (cc->migrate_pfn < start_pfn || cc->migrate_pfn >= end_pfn) {
+		if (cc->migrate_pfn < start_pfn || cc->migrate_pfn >= zone_end_pfn(cc->zone)) {
 			cc->migrate_pfn = start_pfn;
 			cc->zone->compact_cached_migrate_pfn[0] = cc->migrate_pfn;
 			cc->zone->compact_cached_migrate_pfn[1] = cc->migrate_pfn;
@@ -2522,8 +2546,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 		 * Only go back, not forward. The cached pfn might have been
 		 * already reset to zone end in compact_finished()
 		 */
-		if (free_pfn > cc->zone->compact_cached_free_pfn)
-			cc->zone->compact_cached_free_pfn = free_pfn;
+		if (free_pfn > dst_zone->compact_cached_free_pfn)
+			dst_zone->compact_cached_free_pfn = free_pfn;
 	}
 
 	count_compact_events(COMPACTMIGRATE_SCANNED, cc->total_migrate_scanned);
@@ -2834,7 +2858,7 @@ static bool kcompactd_node_suitable(pg_data_t *pgdat)
 			continue;
 
 		if (compaction_suitable(zone, pgdat->kcompactd_max_order, 0,
-					highest_zoneidx) == COMPACT_CONTINUE)
+					highest_zoneidx, NULL) == COMPACT_CONTINUE)
 			return true;
 	}
 
@@ -2871,7 +2895,7 @@ static void kcompactd_do_work(pg_data_t *pgdat)
 		if (compaction_deferred(zone, cc.order))
 			continue;
 
-		if (compaction_suitable(zone, cc.order, 0, zoneid) !=
+		if (compaction_suitable(zone, cc.order, 0, zoneid, NULL) !=
 							COMPACT_CONTINUE)
 			continue;
 
diff --git a/mm/internal.h b/mm/internal.h
index 68410c6d97ac..349223cc0359 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -465,6 +465,7 @@ struct compact_control {
 	unsigned long migrate_pfn;
 	unsigned long fast_start_pfn;	/* a pfn to start linear scan from */
 	struct zone *zone;
+	struct zone *dst_zone;		/* use another zone as the destination */
 	unsigned long total_migrate_scanned;
 	unsigned long total_free_scanned;
 	unsigned short fast_search_fail;/* failures to use free list searches */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5bf98d0a22c9..aa21da983804 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6383,7 +6383,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
 		if (!managed_zone(zone))
 			continue;
 
-		switch (compaction_suitable(zone, sc->order, 0, sc->reclaim_idx)) {
+		switch (compaction_suitable(zone, sc->order, 0, sc->reclaim_idx, NULL)) {
 		case COMPACT_SUCCESS:
 		case COMPACT_CONTINUE:
 			return false;
@@ -6580,7 +6580,7 @@ static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
 	unsigned long watermark;
 	enum compact_result suitable;
 
-	suitable = compaction_suitable(zone, sc->order, 0, sc->reclaim_idx);
+	suitable = compaction_suitable(zone, sc->order, 0, sc->reclaim_idx, NULL);
 	if (suitable == COMPACT_SUCCESS)
 		/* Allocation should succeed already. Don't reclaim. */
 		return true;
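
---

A note for readers skimming the series: the patch threads one idiom through
mm/compaction.c, cc->dst_zone ? cc->dst_zone : cc->zone, so that the free
scanner (and the watermark checks in __compaction_suitable()) follow the
destination zone while the migration scanner keeps walking the source zone;
that is also why compact_zone() now clamps cc->migrate_pfn against
zone_end_pfn(cc->zone) but bounds cc->free_pfn by the destination. The sketch
below is illustrative only, a minimal userspace C program with hypothetical
stand-in types (struct zone here is just a name tag, free_scan_zone() is an
invented helper), not kernel code and not part of the patch:

    #include <stdio.h>

    struct zone { const char *name; };

    struct compact_control {
    	struct zone *zone;	/* source: the migrate scanner walks this zone */
    	struct zone *dst_zone;	/* optional destination for free pages */
    };

    /* The fallback idiom the patch repeats at each free-scanner site. */
    static struct zone *free_scan_zone(const struct compact_control *cc)
    {
    	return cc->dst_zone ? cc->dst_zone : cc->zone;
    }

    int main(void)
    {
    	struct zone src = { "Normal(src)" }, dst = { "Movable(dst)" };
    	struct compact_control classic = { &src, NULL };
    	struct compact_control cross = { &src, &dst };

    	/* No dst_zone: degenerates to classic single-zone compaction. */
    	printf("classic: migrate scans %s, free scans %s\n",
    	       classic.zone->name, free_scan_zone(&classic)->name);
    	/* dst_zone set: free pages are isolated from the destination. */
    	printf("cross:   migrate scans %s, free scans %s\n",
    	       cross.zone->name, free_scan_zone(&cross)->name);
    	return 0;
    }

Because NULL selects the source zone, every existing caller that passes NULL
for the new dst_zone parameter (kcompactd, vmscan) keeps its old behaviour
unchanged.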