From patchwork Fri Jan 18 17:51:30 2019
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 10771383
From: Mel Gorman
To: Andrew Morton
Cc: David Rientjes, Andrea Arcangeli, Vlastimil Babka,
    Linux List Kernel Mailing, Linux-MM, Mel Gorman
Subject: [PATCH 16/22] mm, compaction: Rework compact_should_abort as compact_check_resched
Date: Fri, 18 Jan 2019 17:51:30 +0000
Message-Id: <20190118175136.31341-17-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190118175136.31341-1-mgorman@techsingularity.net>
References: <20190118175136.31341-1-mgorman@techsingularity.net>

With the incremental changes so far, compact_should_abort no longer matches
its documented behaviour. Rename it to compact_check_resched and update the
associated comments. There is no benefit beyond reducing redundant code and
making the intent slightly clearer. It could have been merged with earlier
patches, but doing so would have made the review slightly harder.
Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
---
 mm/compaction.c | 61 ++++++++++++++++++++++-----------------------------------
 1 file changed, 23 insertions(+), 38 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 829540f6f3da..9aa71945255d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -404,6 +404,21 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+/*
+ * Aside from avoiding lock contention, compaction also periodically checks
+ * need_resched() and records async compaction as contended if necessary.
+ */
+static inline void compact_check_resched(struct compact_control *cc)
+{
+	/* async compaction aborts if contended */
+	if (need_resched()) {
+		if (cc->mode == MIGRATE_ASYNC)
+			cc->contended = true;
+
+		cond_resched();
+	}
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -432,33 +447,7 @@ static bool compact_unlock_should_abort(spinlock_t *lock,
 		return true;
 	}
 
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-		cond_resched();
-	}
-
-	return false;
-}
-
-/*
- * Aside from avoiding lock contention, compaction also periodically checks
- * need_resched() and either schedules in sync compaction or aborts async
- * compaction. This is similar to what compact_unlock_should_abort() does, but
- * is used where no lock is concerned.
- *
- * Returns false when no scheduling was needed, or sync compaction scheduled.
- * Returns true when async compaction should abort.
- */
-static inline bool compact_should_abort(struct compact_control *cc)
-{
-	/* async compaction aborts if contended */
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-
-		cond_resched();
-	}
+	compact_check_resched(cc);
 
 	return false;
 }
@@ -747,8 +736,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		return 0;
 	}
 
-	if (compact_should_abort(cc))
-		return 0;
+	compact_check_resched(cc);
 
 	if (cc->direct_compaction && (cc->mode == MIGRATE_ASYNC)) {
 		skip_on_failure = true;
@@ -1379,12 +1367,10 @@ static void isolate_freepages(struct compact_control *cc)
 				isolate_start_pfn = block_start_pfn) {
 		/*
 		 * This can iterate a massively long zone without finding any
-		 * suitable migration targets, so periodically check if we need
-		 * to schedule, or even abort async compaction.
+		 * suitable migration targets, so periodically check resched.
 		 */
-		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages))
-						&& compact_should_abort(cc))
-			break;
+		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
+			compact_check_resched(cc);
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
@@ -1675,11 +1661,10 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		/*
 		 * This can potentially iterate a massively long zone with
 		 * many pageblocks unsuitable, so periodically check if we
-		 * need to schedule, or even abort async compaction.
+		 * need to schedule.
 		 */
-		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages))
-						&& compact_should_abort(cc))
-			break;
+		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
+			compact_check_resched(cc);
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
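
For readers following the series without the full tree, the shape of the change can
be modelled outside the kernel. The sketch below is a minimal user-space analogue,
not kernel code: need_resched() and cond_resched() are stubbed with plain user-space
stand-ins, compact_control is reduced to the two fields the helper touches, and
pageblock_nr_pages is given an illustrative value. It only shows how call sites move
from the old abort-on-contention pattern to an unconditional periodic resched check.

/*
 * Minimal user-space analogue of the pattern in this patch; the real
 * need_resched()/cond_resched() are kernel APIs and are stubbed here.
 */
#include <stdbool.h>
#include <stdio.h>
#include <sched.h>

#define MIGRATE_ASYNC		0
#define MIGRATE_SYNC		1
#define SWAP_CLUSTER_MAX	32UL
#define pageblock_nr_pages	512UL	/* illustrative value only */

struct compact_control {
	int mode;		/* MIGRATE_ASYNC or MIGRATE_SYNC */
	bool contended;		/* set when async compaction hits contention */
};

/* Stand-ins for the kernel primitives used by the helper. */
static bool need_resched(void) { return true; }	/* pretend the CPU is always wanted back */
static void cond_resched(void) { sched_yield(); }

/*
 * Mirrors the new helper: record contention for async compaction and
 * reschedule if needed, but never ask the caller to abort the scan.
 */
static inline void compact_check_resched(struct compact_control *cc)
{
	if (need_resched()) {
		if (cc->mode == MIGRATE_ASYNC)
			cc->contended = true;

		cond_resched();
	}
}

int main(void)
{
	struct compact_control cc = { .mode = MIGRATE_ASYNC, .contended = false };
	unsigned long pfn;

	/*
	 * Old call sites did "if (compact_should_abort(cc)) break;".
	 * New call sites simply check resched periodically and keep scanning.
	 */
	for (pfn = 0; pfn < (1UL << 20); pfn += pageblock_nr_pages) {
		if (!(pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
			compact_check_resched(&cc);
	}

	printf("scan completed, contended=%d\n", cc.contended);
	return 0;
}

In this sketch the scan always runs to completion while contention is still recorded
for async compaction, which mirrors the call-site changes above: the scanners no
longer bail out early, and the contended flag remains available to the caller.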