From patchwork Mon Apr 18 14:12:52 2022
From: Miaohe Lin <linmiaohe@huawei.com>
Subject: [PATCH 11/12] mm: compaction: simplify the code in __compact_finished
Date: Mon, 18 Apr 2022 22:12:52 +0800
Message-ID: <20220418141253.24298-12-linmiaohe@huawei.com>
In-Reply-To: <20220418141253.24298-1-linmiaohe@huawei.com>
References: <20220418141253.24298-1-linmiaohe@huawei.com>

Since commit efe771c7603b ("mm, compaction: always finish scanning of
a full pageblock"), compaction always finishes scanning a full
pageblock, so migrate_pfn is guaranteed to be aligned to
pageblock_nr_pages when we reach here. The IS_ALIGNED check of
migrate_pfn below therefore always passes, and we always return
COMPACT_SUCCESS once a suitable fallback is found. Simplify the code
to make this clear and improve readability. No functional change
intended.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/compaction.c | 29 ++++++++---------------------
 1 file changed, 8 insertions(+), 21 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 334a573485fe..609a76d7e051 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2134,29 +2134,16 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 		 * other migratetype buddy lists.
 		 */
 		if (find_suitable_fallback(area, order, migratetype,
-						true, &can_steal) != -1) {
-
-			/* movable pages are OK in any pageblock */
-			if (migratetype == MIGRATE_MOVABLE)
-				return COMPACT_SUCCESS;
-
+						true, &can_steal) != -1)
 			/*
-			 * We are stealing for a non-movable allocation. Make
-			 * sure we finish compacting the current pageblock
-			 * first so it is as free as possible and we won't
-			 * have to steal another one soon. This only applies
-			 * to sync compaction, as async compaction operates
-			 * on pageblocks of the same migratetype.
+			 * Movable pages are OK in any pageblock. If we are
+			 * stealing for a non-movable allocation, make sure
+			 * we finish compacting the current pageblock first
+			 * (which is assured by the above migrate_pfn align
+			 * check) so it is as free as possible and we won't
+			 * have to steal another one soon.
 			 */
-			if (cc->mode == MIGRATE_ASYNC ||
-					IS_ALIGNED(cc->migrate_pfn,
-						   pageblock_nr_pages)) {
-				return COMPACT_SUCCESS;
-			}
-
-			ret = COMPACT_CONTINUE;
-			break;
-		}
+			return COMPACT_SUCCESS;
 	}
 
 out:
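
For reference, below is a minimal user-space sketch of why the removed
branch was dead code: once migrate_pfn only advances in whole-pageblock
steps, the IS_ALIGNED() test can never fail, so both arms of the old
"cc->mode == MIGRATE_ASYNC || IS_ALIGNED(...)" condition collapse into
an unconditional COMPACT_SUCCESS. This is not kernel code; IS_ALIGNED
is a simplified copy of the kernel macro, and PAGEBLOCK_NR_PAGES below
is an assumed example value (the real pageblock_nr_pages depends on
the architecture's pageblock_order):

#include <assert.h>
#include <stdio.h>

/* Simplified version of the kernel's IS_ALIGNED() (power-of-two 'a') */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* Assumed example: 2MB pageblocks with 4K pages; not a kernel constant */
#define PAGEBLOCK_NR_PAGES	512UL

int main(void)
{
	unsigned long migrate_pfn;

	/*
	 * Simulate the invariant from commit efe771c7603b: the migration
	 * scanner always finishes a full pageblock, so migrate_pfn is
	 * advanced in whole-pageblock steps and therefore stays aligned.
	 */
	for (migrate_pfn = 0; migrate_pfn < 16 * PAGEBLOCK_NR_PAGES;
	     migrate_pfn += PAGEBLOCK_NR_PAGES)
		assert(IS_ALIGNED(migrate_pfn, PAGEBLOCK_NR_PAGES));

	printf("IS_ALIGNED() holds for every pageblock-aligned migrate_pfn\n");
	return 0;
}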