From patchwork Mon Apr 25 14:31:16 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12825850
From: Zi Yan
To: David Hildenbrand , linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 Vlastimil Babka , Mel Gorman , Eric Ren , Mike Rapoport , Oscar Salvador ,
 Christophe Leroy , Andrew Morton , Zi Yan
Subject: [PATCH v11 4/6] mm: page_isolation: enable arbitrary range page isolation.
Date: Mon, 25 Apr 2022 10:31:16 -0400
Message-Id: <20220425143118.2850746-5-zi.yan@sent.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220425143118.2850746-1-zi.yan@sent.com>
References: <20220425143118.2850746-1-zi.yan@sent.com>
Reply-To: Zi Yan
MIME-Version: 1.0

From: Zi Yan

Now that start_isolate_page_range() can handle an arbitrary range, move the
alignment check/adjustment into the function body. Do the same for its
counterpart, undo_isolate_page_range(). Their caller, alloc_contig_range(),
can now pass an arbitrary range instead of a MAX_ORDER_NR_PAGES-aligned one.

Signed-off-by: Zi Yan
Signed-off-by: Zi Yan
Signed-off-by: Andrew Morton
---
 mm/page_alloc.c     | 16 ++--------------
 mm/page_isolation.c | 33 ++++++++++++++++-----------------
 2 files changed, 18 insertions(+), 31 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 70ddd9a0bcf3..a002cf12eb6c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8924,16 +8924,6 @@ void *__init alloc_large_system_hash(const char *tablename,
 }
 
 #ifdef CONFIG_CONTIG_ALLOC
-static unsigned long pfn_max_align_down(unsigned long pfn)
-{
-	return ALIGN_DOWN(pfn, MAX_ORDER_NR_PAGES);
-}
-
-static unsigned long pfn_max_align_up(unsigned long pfn)
-{
-	return ALIGN(pfn, MAX_ORDER_NR_PAGES);
-}
-
 #if defined(CONFIG_DYNAMIC_DEBUG) || \
 	(defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
 /* Usage: See admin-guide/dynamic-debug-howto.rst */
@@ -9075,8 +9065,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	 * put back to page allocator so that buddy can use them.
 	 */
 
-	ret = start_isolate_page_range(pfn_max_align_down(start),
-				       pfn_max_align_up(end), migratetype, 0, gfp_mask);
+	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
 	if (ret)
 		goto done;
 
@@ -9157,8 +9146,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		free_contig_range(end, outer_end - end);
 
 done:
-	undo_isolate_page_range(pfn_max_align_down(start),
-				pfn_max_align_up(end), migratetype);
+	undo_isolate_page_range(start, end, migratetype);
 	return ret;
 }
 EXPORT_SYMBOL(alloc_contig_range);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 94b3467e5ba2..75e454f5cf45 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -435,7 +435,6 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, gfp_t gfp_flags,
  *			be MIGRATE_ISOLATE.
  * @start_pfn:		The lower PFN of the range to be isolated.
  * @end_pfn:		The upper PFN of the range to be isolated.
- *			start_pfn/end_pfn must be aligned to pageblock_order.
  * @migratetype:	Migrate type to set in error recovery.
  * @flags:		The following flags are allowed (they can be combined in
  *			a bit mask)
@@ -482,33 +481,33 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 {
 	unsigned long pfn;
 	struct page *page;
+	/* isolation is done at page block granularity */
+	unsigned long isolate_start = ALIGN_DOWN(start_pfn, pageblock_nr_pages);
+	unsigned long isolate_end = ALIGN(end_pfn, pageblock_nr_pages);
 	int ret;
 
-	BUG_ON(!IS_ALIGNED(start_pfn, pageblock_nr_pages));
-	BUG_ON(!IS_ALIGNED(end_pfn, pageblock_nr_pages));
-
-	/* isolate [start_pfn, start_pfn + pageblock_nr_pages) pageblock */
-	ret = isolate_single_pageblock(start_pfn, gfp_flags, false);
+	/* isolate [isolate_start, isolate_start + pageblock_nr_pages) pageblock */
+	ret = isolate_single_pageblock(isolate_start, gfp_flags, false);
 	if (ret)
 		return ret;
 
-	/* isolate [end_pfn - pageblock_nr_pages, end_pfn) pageblock */
-	ret = isolate_single_pageblock(end_pfn, gfp_flags, true);
+	/* isolate [isolate_end - pageblock_nr_pages, isolate_end) pageblock */
+	ret = isolate_single_pageblock(isolate_end, gfp_flags, true);
 	if (ret) {
-		unset_migratetype_isolate(pfn_to_page(start_pfn), migratetype);
+		unset_migratetype_isolate(pfn_to_page(isolate_start), migratetype);
 		return ret;
 	}
 
 	/* skip isolated pageblocks at the beginning and end */
-	for (pfn = start_pfn + pageblock_nr_pages;
-	     pfn < end_pfn - pageblock_nr_pages;
+	for (pfn = isolate_start + pageblock_nr_pages;
+	     pfn < isolate_end - pageblock_nr_pages;
 	     pfn += pageblock_nr_pages) {
 		page = __first_valid_page(pfn, pageblock_nr_pages);
 		if (page && set_migratetype_isolate(page, migratetype, flags,
 					start_pfn, end_pfn)) {
-			undo_isolate_page_range(start_pfn, pfn, migratetype);
+			undo_isolate_page_range(isolate_start, pfn, migratetype);
 			unset_migratetype_isolate(
-				pfn_to_page(end_pfn - pageblock_nr_pages),
+				pfn_to_page(isolate_end - pageblock_nr_pages),
 				migratetype);
 			return -EBUSY;
 		}
@@ -524,12 +523,12 @@ void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 {
 	unsigned long pfn;
 	struct page *page;
+	unsigned long isolate_start = ALIGN_DOWN(start_pfn, pageblock_nr_pages);
+	unsigned long isolate_end = ALIGN(end_pfn, pageblock_nr_pages);
 
-	BUG_ON(!IS_ALIGNED(start_pfn, pageblock_nr_pages));
-	BUG_ON(!IS_ALIGNED(end_pfn, pageblock_nr_pages));
 
-	for (pfn = start_pfn;
-	     pfn < end_pfn;
+	for (pfn = isolate_start;
+	     pfn < isolate_end;
 	     pfn += pageblock_nr_pages) {
 		page = __first_valid_page(pfn, pageblock_nr_pages);
 		if (!page || !is_migrate_isolate_page(page))
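
For reference, the sketch below (userspace-only, not kernel code) illustrates the
pageblock-granularity rounding that this patch moves into
start_isolate_page_range() and undo_isolate_page_range(): an arbitrary
[start_pfn, end_pfn) range is widened to pageblock boundaries before isolation.
ALIGN_DOWN()/ALIGN_UP() are simplified stand-ins for the kernel macros, and
PAGEBLOCK_NR_PAGES is an assumed example value (512 PFNs, i.e. 2MB pageblocks
with 4KB pages).

/* pageblock_align_demo.c - build with: cc -o demo pageblock_align_demo.c */
#include <stdio.h>

/* Simplified stand-ins for the kernel's alignment macros (power-of-two only). */
#define ALIGN_DOWN(x, a)   ((x) & ~((unsigned long)(a) - 1))
#define ALIGN_UP(x, a)     (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/* Assumed example value: 2MB pageblock / 4KB page = 512 PFNs per pageblock. */
#define PAGEBLOCK_NR_PAGES 512UL

int main(void)
{
	/* An arbitrary, unaligned range such as alloc_contig_range() may now pass. */
	unsigned long start_pfn = 0x10064;
	unsigned long end_pfn   = 0x103e8;

	/* The adjustment now done inside the isolation helpers. */
	unsigned long isolate_start = ALIGN_DOWN(start_pfn, PAGEBLOCK_NR_PAGES);
	unsigned long isolate_end   = ALIGN_UP(end_pfn, PAGEBLOCK_NR_PAGES);

	printf("requested pfn range: [%#lx, %#lx)\n", start_pfn, end_pfn);
	printf("isolated  pfn range: [%#lx, %#lx)\n", isolate_start, isolate_end);
	return 0;
}

Running it prints [0x10064, 0x103e8) as the requested range and
[0x10000, 0x10400) as the pageblock-aligned range that actually gets isolated.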