From patchwork Fri Dec 14 23:03:00 2018
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 10731779
From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM <linux-mm@kvack.org>
Cc: David Rientjes, Andrea Arcangeli, Linus Torvalds, Michal Hocko,
    ying.huang@intel.com, kirill@shutemov.name, Andrew Morton,
    Linux List Kernel Mailing, Mel Gorman
Subject: [PATCH 04/14] mm, compaction: Rename map_pages to split_map_pages
Date: Fri, 14 Dec 2018 23:03:00 +0000
Message-Id: <20181214230310.572-5-mgorman@techsingularity.net>
In-Reply-To: <20181214230310.572-1-mgorman@techsingularity.net>
References: <20181214230310.572-1-mgorman@techsingularity.net>

It's not obvious from the function name that high-order free pages are
split into order-0 pages. Fix it by renaming map_pages to split_map_pages.
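For readers coming to this cold: the helper takes the high-order free pages
that __isolate_free_page() placed on the list, runs the same post-allocation
preparation the page allocator would, and splits them into order-0 pages
before they are handed to the migration code. The sketch below is an
approximation built around the declarations visible in the first hunk of the
diff; the temporary list name and the GFP flag are illustrative, not taken
from this patch.

static void split_map_pages(struct list_head *list)
{
	unsigned int i, order, nr_pages;
	struct page *page, *next;
	LIST_HEAD(tmp_list);

	list_for_each_entry_safe(page, next, list, lru) {
		list_del(&page->lru);

		/* __isolate_free_page() stashed the buddy order in page_private */
		order = page_private(page);
		nr_pages = 1 << order;

		/* Prepare ("map") the page as the page allocator would */
		post_alloc_hook(page, order, __GFP_MOVABLE);

		/* Split the high-order page into order-0 pages */
		if (order)
			split_page(page, order);

		/* Queue every resulting base page for the free scanner */
		for (i = 0; i < nr_pages; i++) {
			list_add(&page->lru, &tmp_list);
			page++;
		}
	}

	list_splice(&tmp_list, list);
}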
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/compaction.c | 60 ++++++++++++++++++++++++++++-----------------------------
 1 file changed, 29 insertions(+), 31 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index fb4d9f52ed56..3afa4e9188b6 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -66,7 +66,7 @@ static unsigned long release_freepages(struct list_head *freelist)
 	return high_pfn;
 }

-static void map_pages(struct list_head *list)
+static void split_map_pages(struct list_head *list)
 {
 	unsigned int i, order, nr_pages;
 	struct page *page, *next;
@@ -644,7 +644,7 @@ isolate_freepages_range(struct compact_control *cc,
 	}

 	/* __isolate_free_page() does not map the pages */
-	map_pages(&freelist);
+	split_map_pages(&freelist);

 	if (pfn < end_pfn) {
 		/* Loop terminated early, cleanup. */
@@ -1141,7 +1141,7 @@ static void isolate_freepages(struct compact_control *cc)
 	}

 	/* __isolate_free_page() does not map the pages */
-	map_pages(freelist);
+	split_map_pages(freelist);

 	/*
 	 * Record where the free scanner will restart next time. Either we
@@ -1300,8 +1300,7 @@ static inline bool is_via_compact_memory(int order)
 	return order == -1;
 }

-static enum compact_result __compact_finished(struct zone *zone,
-						struct compact_control *cc)
+static enum compact_result __compact_finished(struct compact_control *cc)
 {
 	unsigned int order;
 	const int migratetype = cc->migratetype;
@@ -1312,7 +1311,7 @@ static enum compact_result __compact_finished(struct zone *zone,
 	/* Compaction run completes if the migrate and free scanner meet */
 	if (compact_scanners_met(cc)) {
 		/* Let the next compaction start anew. */
-		reset_cached_positions(zone);
+		reset_cached_positions(cc->zone);

 		/*
 		 * Mark that the PG_migrate_skip information should be cleared
@@ -1321,7 +1320,7 @@ static enum compact_result __compact_finished(struct zone *zone,
 		 * based on an allocation request.
 		 */
 		if (cc->direct_compaction)
-			zone->compact_blockskip_flush = true;
+			cc->zone->compact_blockskip_flush = true;

 		if (cc->whole_zone)
 			return COMPACT_COMPLETE;
@@ -1345,7 +1344,7 @@ static enum compact_result __compact_finished(struct zone *zone,

 	/* Direct compactor: Is a suitable page free? */
 	for (order = cc->order; order < MAX_ORDER; order++) {
-		struct free_area *area = &zone->free_area[order];
+		struct free_area *area = &cc->zone->free_area[order];
 		bool can_steal;

 		/* Job done if page is free of the right migratetype */
@@ -1391,13 +1390,12 @@ static enum compact_result __compact_finished(struct zone *zone,
 	return COMPACT_NO_SUITABLE_PAGE;
 }

-static enum compact_result compact_finished(struct zone *zone,
-			struct compact_control *cc)
+static enum compact_result compact_finished(struct compact_control *cc)
 {
 	int ret;

-	ret = __compact_finished(zone, cc);
-	trace_mm_compaction_finished(zone, cc->order, ret);
+	ret = __compact_finished(cc);
+	trace_mm_compaction_finished(cc->zone, cc->order, ret);
 	if (ret == COMPACT_NO_SUITABLE_PAGE)
 		ret = COMPACT_CONTINUE;

@@ -1524,16 +1522,16 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
 	return false;
 }

-static enum compact_result compact_zone(struct zone *zone, struct compact_control *cc)
+static enum compact_result compact_zone(struct compact_control *cc)
 {
 	enum compact_result ret;
-	unsigned long start_pfn = zone->zone_start_pfn;
-	unsigned long end_pfn = zone_end_pfn(zone);
+	unsigned long start_pfn = cc->zone->zone_start_pfn;
+	unsigned long end_pfn = zone_end_pfn(cc->zone);
 	unsigned long last_migrated_pfn;
 	const bool sync = cc->mode != MIGRATE_ASYNC;

 	cc->migratetype = gfpflags_to_migratetype(cc->gfp_mask);
-	ret = compaction_suitable(zone, cc->order, cc->alloc_flags,
+	ret = compaction_suitable(cc->zone, cc->order, cc->alloc_flags,
 							cc->classzone_idx);
 	/* Compaction is likely to fail */
 	if (ret == COMPACT_SUCCESS || ret == COMPACT_SKIPPED)
@@ -1546,8 +1544,8 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
 	 * Clear pageblock skip if there were failures recently and compaction
 	 * is about to be retried after being deferred.
 	 */
-	if (compaction_restarting(zone, cc->order))
-		__reset_isolation_suitable(zone);
+	if (compaction_restarting(cc->zone, cc->order))
+		__reset_isolation_suitable(cc->zone);

 	/*
 	 * Setup to move all movable pages to the end of the zone. Used cached
@@ -1559,16 +1557,16 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
 		cc->migrate_pfn = start_pfn;
 		cc->free_pfn = pageblock_start_pfn(end_pfn - 1);
 	} else {
-		cc->migrate_pfn = zone->compact_cached_migrate_pfn[sync];
-		cc->free_pfn = zone->compact_cached_free_pfn;
+		cc->migrate_pfn = cc->zone->compact_cached_migrate_pfn[sync];
+		cc->free_pfn = cc->zone->compact_cached_free_pfn;
 		if (cc->free_pfn < start_pfn || cc->free_pfn >= end_pfn) {
 			cc->free_pfn = pageblock_start_pfn(end_pfn - 1);
-			zone->compact_cached_free_pfn = cc->free_pfn;
+			cc->zone->compact_cached_free_pfn = cc->free_pfn;
 		}
 		if (cc->migrate_pfn < start_pfn || cc->migrate_pfn >= end_pfn) {
 			cc->migrate_pfn = start_pfn;
-			zone->compact_cached_migrate_pfn[0] = cc->migrate_pfn;
-			zone->compact_cached_migrate_pfn[1] = cc->migrate_pfn;
+			cc->zone->compact_cached_migrate_pfn[0] = cc->migrate_pfn;
+			cc->zone->compact_cached_migrate_pfn[1] = cc->migrate_pfn;
 		}

 		if (cc->migrate_pfn == start_pfn)
@@ -1582,11 +1580,11 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro

 	migrate_prep_local();

-	while ((ret = compact_finished(zone, cc)) == COMPACT_CONTINUE) {
+	while ((ret = compact_finished(cc)) == COMPACT_CONTINUE) {
 		int err;
 		unsigned long start_pfn = cc->migrate_pfn;

-		switch (isolate_migratepages(zone, cc)) {
+		switch (isolate_migratepages(cc->zone, cc)) {
 		case ISOLATE_ABORT:
 			ret = COMPACT_CONTENDED;
 			putback_movable_pages(&cc->migratepages);
@@ -1653,7 +1651,7 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
 			if (last_migrated_pfn < current_block_start) {
 				cpu = get_cpu();
 				lru_add_drain_cpu(cpu);
-				drain_local_pages(zone);
+				drain_local_pages(cc->zone);
 				put_cpu();
 				/* No more flushing until we migrate again */
 				last_migrated_pfn = 0;
@@ -1678,8 +1676,8 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
 		 * Only go back, not forward. The cached pfn might have been
 		 * already reset to zone end in compact_finished()
 		 */
-		if (free_pfn > zone->compact_cached_free_pfn)
-			zone->compact_cached_free_pfn = free_pfn;
+		if (free_pfn > cc->zone->compact_cached_free_pfn)
+			cc->zone->compact_cached_free_pfn = free_pfn;
 	}

 	count_compact_events(COMPACTMIGRATE_SCANNED, cc->total_migrate_scanned);
@@ -1716,7 +1714,7 @@ static enum compact_result compact_zone_order(struct zone *zone, int order,
 	INIT_LIST_HEAD(&cc.freepages);
 	INIT_LIST_HEAD(&cc.migratepages);

-	ret = compact_zone(zone, &cc);
+	ret = compact_zone(&cc);

 	VM_BUG_ON(!list_empty(&cc.freepages));
 	VM_BUG_ON(!list_empty(&cc.migratepages));
@@ -1834,7 +1832,7 @@ static void compact_node(int nid)
 		INIT_LIST_HEAD(&cc.freepages);
 		INIT_LIST_HEAD(&cc.migratepages);

-		compact_zone(zone, &cc);
+		compact_zone(&cc);

 		VM_BUG_ON(!list_empty(&cc.freepages));
 		VM_BUG_ON(!list_empty(&cc.migratepages));
@@ -1976,7 +1974,7 @@ static void kcompactd_do_work(pg_data_t *pgdat)
 		if (kthread_should_stop())
 			return;

-		status = compact_zone(zone, &cc);
+		status = compact_zone(&cc);

 		if (status == COMPACT_SUCCESS) {
 			compaction_defer_reset(zone, cc.order, false);