From patchwork Wed Sep 16 18:34:08 2020
X-Patchwork-Id: 11780469
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
    Andrew Morton, David Hildenbrand, Alexander Duyck, Mel Gorman,
    Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
    Oscar Salvador, Mike Rapoport
Subject: [PATCH RFC 1/4] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
Date: Wed, 16 Sep 2020 20:34:08 +0200
Message-Id: <20200916183411.64756-2-david@redhat.com>
In-Reply-To: <20200916183411.64756-1-david@redhat.com>
References: <20200916183411.64756-1-david@redhat.com>

Let's prepare for additional flags and avoid long parameter lists of bools.
Follow-up patches will also make use of the flags in __free_pages_ok();
however, I wasn't able to come up with a better name for the type - it
should be good enough for internal purposes.
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Signed-off-by: David Hildenbrand
Reviewed-by: Alexander Duyck
Reviewed-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
---
 mm/page_alloc.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6b699d273d6e..91cefb8157dd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -77,6 +77,18 @@
 #include "shuffle.h"
 #include "page_reporting.h"
 
+/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
+typedef int __bitwise fop_t;
+
+/* No special request */
+#define FOP_NONE		((__force fop_t)0)
+
+/*
+ * Skip free page reporting notification after buddy merging (will *not* mark
+ * the page reported, only skip the notification).
+ */
+#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
  * -- nyc
  */
 
-static inline void __free_one_page(struct page *page,
-		unsigned long pfn,
-		struct zone *zone, unsigned int order,
-		int migratetype, bool report)
+static inline void __free_one_page(struct page *page, unsigned long pfn,
+				   struct zone *zone, unsigned int order,
+				   int migratetype, fop_t fop_flags)
 {
 	struct capture_control *capc = task_capc(zone);
 	unsigned long buddy_pfn;
@@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
 	add_to_free_list(page, zone, order, migratetype);
 
 	/* Notify page reporting subsystem of freed page */
-	if (report)
+	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
 		page_reporting_notify_free(order);
 }
 
@@ -1368,7 +1379,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			if (unlikely(isolated_pageblocks))
 				mt = get_pageblock_migratetype(page);
 
-			__free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
+			__free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
 			trace_mm_page_pcpu_drain(page, 0, mt);
 		}
 		spin_unlock(&zone->lock);
@@ -1384,7 +1395,7 @@ static void free_one_page(struct zone *zone,
 			is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype, true);
+	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
 	spin_unlock(&zone->lock);
 }
 
@@ -3277,7 +3288,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 	lockdep_assert_held(&zone->lock);
 
 	/* Return isolated page to tail of freelist. */
-	__free_one_page(page, page_to_pfn(page), zone, order, mt, false);
+	__free_one_page(page, page_to_pfn(page), zone, order, mt,
+			FOP_SKIP_REPORT_NOTIFY);
 }
 
 /*
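For readers who don't live in mm/, here is a standalone userspace sketch of
the typed-flag idiom the patch introduces - not kernel code: the sparse-only
__bitwise/__force annotations and the kernel's BIT() macro are stubbed out,
and notify_free()/free_one_page_sketch() are hypothetical stand-ins for
page_reporting_notify_free()/__free_one_page():

/* Standalone sketch (userspace C), mirroring the flag handling above. */
#include <stdio.h>

#define __bitwise
#define __force
#define BIT(nr)			(1U << (nr))

typedef int __bitwise fop_t;

#define FOP_NONE		((__force fop_t)0)
#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))

/* Stand-in for page_reporting_notify_free() */
static void notify_free(unsigned int order)
{
	printf("reporting notified, order=%u\n", order);
}

/* Mirrors the control flow of __free_one_page() after the conversion. */
static void free_one_page_sketch(unsigned int order, fop_t fop_flags)
{
	/* ... buddy merging and freelist placement would happen here ... */
	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
		notify_free(order);
}

int main(void)
{
	free_one_page_sketch(0, FOP_NONE);			/* notifies */
	free_one_page_sketch(9, FOP_SKIP_REPORT_NOTIFY);	/* silent */
	return 0;
}

The point of the __bitwise typedef in the real kernel code is that sparse can
warn when a plain integer is mixed with a fop_t, which a bool parameter in a
long argument list cannot do.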
From patchwork Wed Sep 16 18:34:09 2020
X-Patchwork-Id: 11780465
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
    Andrew Morton, David Hildenbrand, Alexander Duyck, Mel Gorman,
    Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
    Oscar Salvador, Mike Rapoport, Scott Cheloha, Michael Ellerman
Subject: [PATCH RFC 2/4] mm/page_alloc: place pages to tail in __putback_isolated_page()
Date: Wed, 16 Sep 2020 20:34:09 +0200
Message-Id: <20200916183411.64756-3-david@redhat.com>
In-Reply-To: <20200916183411.64756-1-david@redhat.com>
References: <20200916183411.64756-1-david@redhat.com>

__putback_isolated_page() already documents that pages will be placed to
the tail of the freelist - this is, however, not the case for
"order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be the
case for all existing users.
This change affects two users:
- free page reporting
- page isolation, when undoing the isolation

This behavior is desirable for pages that haven't really been touched
lately, so exactly the two users that don't actually read/write page
content, but rather move untouched pages.

The new behavior is especially desirable for memory onlining, where we
allow allocation of newly onlined pages via undo_isolate_page_range() in
online_pages(). Right now, we always place them to the head of the
freelist, resulting in undesirable behavior: Assume we add individual
memory chunks via add_memory() and online them right away to the NORMAL
zone. We create a dependency chain of unmovable allocations e.g., via the
memmap. The memmap of the next chunk will be placed onto previous chunks -
if the last block cannot get offlined+removed, all dependent ones cannot
get offlined+removed. While this can already be observed with individual
DIMMs, it's more of an issue for virtio-mem (and I suspect also ppc DLPAR).

Note: If we observe a degradation due to the changed page isolation
behavior (which I doubt), we can always make this configurable by the
instance triggering undo of isolation (e.g., alloc_contig_range(), memory
onlining, memory offlining).

Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Scott Cheloha
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
Reviewed-by: Alexander Duyck
Reviewed-by: Oscar Salvador
---
 mm/page_alloc.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 91cefb8157dd..bba9a0f60c70 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -89,6 +89,12 @@ typedef int __bitwise fop_t;
  */
 #define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
 
+/*
+ * Place the freed page to the tail of the freelist after buddy merging. Will
+ * get ignored with page shuffling enabled.
+ */
+#define FOP_TO_TAIL		((__force fop_t)BIT(1))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -1040,6 +1046,8 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
 
 	if (is_shuffle_order(order))
 		to_tail = shuffle_pick_tail();
+	else if (fop_flags & FOP_TO_TAIL)
+		to_tail = true;
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
 
@@ -3289,7 +3297,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
-			FOP_SKIP_REPORT_NOTIFY);
+			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
 }
 
 /*
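As a rough sketch of the resulting decision order in __free_one_page() -
shuffling first, then an explicit FOP_TO_TAIL request, then the
buddy_merge_likely() heuristic - here is standalone userspace C; the
is_shuffle_order()/shuffle_pick_tail()/buddy_merge_likely_stub() helpers are
assumed stand-ins with fixed return values, not the kernel implementations:

/* Standalone sketch (userspace C) of the to_tail selection. */
#include <stdbool.h>
#include <stdio.h>

typedef int fop_t;
#define FOP_NONE	0
#define FOP_TO_TAIL	(1 << 1)

static bool is_shuffle_order(unsigned int order)  { (void)order; return false; } /* shuffling off */
static bool shuffle_pick_tail(void)               { return true;  }
static bool buddy_merge_likely_stub(void)         { return false; } /* pretend "merge unlikely" */

static bool pick_to_tail(unsigned int order, fop_t fop_flags)
{
	if (is_shuffle_order(order))
		return shuffle_pick_tail();
	else if (fop_flags & FOP_TO_TAIL)
		return true;
	return buddy_merge_likely_stub();
}

int main(void)
{
	printf("FOP_NONE    -> tail=%d\n", pick_to_tail(0, FOP_NONE));		/* 0 */
	printf("FOP_TO_TAIL -> tail=%d\n", pick_to_tail(0, FOP_TO_TAIL));	/* 1 */
	return 0;
}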
From patchwork Wed Sep 16 18:34:10 2020
X-Patchwork-Id: 11780467
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
    Andrew Morton, David Hildenbrand, Alexander Duyck, Mel Gorman,
    Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
    Oscar Salvador, Mike Rapoport, Scott Cheloha, Michael Ellerman
Subject: [PATCH RFC 3/4] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
Date: Wed, 16 Sep 2020 20:34:10 +0200
Message-Id: <20200916183411.64756-4-david@redhat.com>
In-Reply-To: <20200916183411.64756-1-david@redhat.com>
References: <20200916183411.64756-1-david@redhat.com>

Page isolation doesn't actually touch the pages, it simply isolates
pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
We already place pages to the tail of the freelists when undoing isolation
via __putback_isolated_page(); let's do it in any case (e.g., if
order == pageblock_order) and document the behavior.

This change results in all pages that get onlined via online_pages() being
placed to the tail of the freelist.

Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Scott Cheloha
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
---
 include/linux/page-isolation.h |  2 ++
 mm/page_alloc.c                | 36 +++++++++++++++++++++++++++++-----
 mm/page_isolation.c            |  8 ++++++--
 3 files changed, 39 insertions(+), 7 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 572458016331..a36be2cf4dbb 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -38,6 +38,8 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 void set_pageblock_migratetype(struct page *page, int migratetype);
 int move_freepages_block(struct zone *zone, struct page *page,
 			 int migratetype, int *num_movable);
+int move_freepages_block_tail(struct zone *zone, struct page *page,
+			      int migratetype);
 
 /*
  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bba9a0f60c70..75b0f49b4022 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -899,6 +899,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
 	list_move(&page->lru, &area->free_list[migratetype]);
 }
 
+/* Used for pages which are on another list */
+static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
+					  unsigned int order, int migratetype)
+{
+	struct free_area *area = &zone->free_area[order];
+
+	list_move_tail(&page->lru, &area->free_list[migratetype]);
+}
+
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 					   unsigned int order)
 {
@@ -2323,7 +2332,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  */
 static int move_freepages(struct zone *zone,
 			  struct page *start_page, struct page *end_page,
-			  int migratetype, int *num_movable)
+			  int migratetype, int *num_movable, bool to_tail)
 {
 	struct page *page;
 	unsigned int order;
@@ -2354,7 +2363,10 @@ static int move_freepages(struct zone *zone,
 		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
 
 		order = page_order(page);
-		move_to_free_list(page, zone, order, migratetype);
+		if (to_tail)
+			move_to_free_list_tail(page, zone, order, migratetype);
+		else
+			move_to_free_list(page, zone, order, migratetype);
 		page += 1 << order;
 		pages_moved += 1 << order;
 	}
@@ -2362,8 +2374,9 @@ static int move_freepages(struct zone *zone,
 	return pages_moved;
 }
 
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable)
+static int __move_freepages_block(struct zone *zone, struct page *page,
+				  int migratetype, int *num_movable,
+				  bool to_tail)
 {
 	unsigned long start_pfn, end_pfn;
 	struct page *start_page, *end_page;
@@ -2384,7 +2397,20 @@ int move_freepages_block(struct zone *zone, struct page *page,
 		return 0;
 
 	return move_freepages(zone, start_page, end_page, migratetype,
-			      num_movable);
+			      num_movable, to_tail);
+}
+
+int move_freepages_block(struct zone *zone, struct page *page,
+			 int migratetype, int *num_movable)
+{
+	return __move_freepages_block(zone, page, migratetype, num_movable,
+				      false);
+}
+
+int move_freepages_block_tail(struct zone *zone, struct page *page,
+			      int migratetype)
+{
+	return __move_freepages_block(zone, page, migratetype, NULL, true);
 }
 
 static void change_pageblock_range(struct page *pageblock_page,
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abfe26ad59fd..84aa1d14751d 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -83,7 +83,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * Because freepage with more than pageblock_order on isolated
 	 * pageblock is restricted to merge due to freepage counting problem,
 	 * it is possible that there is free buddy page.
-	 * move_freepages_block() doesn't care of merge so we need other
+	 * move_freepages_block*() don't care about merging, so we need another
 	 * approach in order to merge them. Isolation and free will make
 	 * these pages to be merged.
 	 */
@@ -106,9 +106,13 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * If we isolate freepage with more than pageblock_order, there
 	 * should be no freepage in the range, so we could avoid costly
 	 * pageblock scanning for freepage moving.
+	 *
+	 * We didn't actually touch any of the isolated pages, so place them
+	 * to the tail of the freelists. This is especially relevant during
+	 * memory onlining.
 	 */
 	if (!isolated_page) {
-		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
+		nr_pages = move_freepages_block_tail(zone, page, migratetype);
 		__mod_zone_freepage_state(zone, nr_pages, migratetype);
 	}
 	set_pageblock_migratetype(page, migratetype);
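To illustrate why a dedicated tail variant matters, here is a toy userspace
sketch of head vs. tail placement on a circular doubly linked list (the same
shape as the kernel's list_head); list_add()/list_add_tail() below are
reimplemented stand-ins, and "allocation" simply takes from the head:

/* Standalone sketch (userspace C): head vs. tail freelist placement. */
#include <stdio.h>

struct node { int pfn; struct node *prev, *next; };

static void list_init(struct node *head) { head->prev = head->next = head; }

static void list_add(struct node *head, struct node *n)	/* add at head */
{
	n->next = head->next; n->prev = head;
	head->next->prev = n; head->next = n;
}

static void list_add_tail(struct node *head, struct node *n)	/* add at tail */
{
	n->prev = head->prev; n->next = head;
	head->prev->next = n; head->prev = n;
}

int main(void)
{
	struct node head, a = { .pfn = 100 }, b = { .pfn = 200 };

	list_init(&head);
	list_add(&head, &a);		/* like move_to_free_list() */
	list_add_tail(&head, &b);	/* like move_to_free_list_tail() */

	/* Allocation takes from the head: pfn 100 is handed out first, pfn 200 last. */
	printf("first: %d, last: %d\n", head.next->pfn, head.prev->pfn);
	return 0;
}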
From patchwork Wed Sep 16 18:34:11 2020
X-Patchwork-Id: 11780455
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
    Andrew Morton, David Hildenbrand, Alexander Duyck, Mel Gorman,
    Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
    Oscar Salvador, Mike Rapoport, "K. Y. Srinivasan", Haiyang Zhang,
    Stephen Hemminger, Wei Liu
Subject: [PATCH RFC 4/4] mm/page_alloc: place pages to tail in __free_pages_core()
Date: Wed, 16 Sep 2020 20:34:11 +0200
Message-Id: <20200916183411.64756-5-david@redhat.com>
In-Reply-To: <20200916183411.64756-1-david@redhat.com>
References: <20200916183411.64756-1-david@redhat.com>

__free_pages_core() is used when exposing fresh memory to the buddy during
system boot and when onlining memory in generic_online_page().

generic_online_page() is used in two cases:
1. Direct memory onlining in online_pages().
2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
   balloon and virtio-mem), when parts of a section are kept fake-offline
   to be fake-onlined later on.

In 1, we already place pages to the tail of the freelist. Pages will be
freed to MIGRATE_ISOLATE lists first and moved to the tail of the
freelists via undo_isolate_page_range().

In 2, we currently don't implement a proper rule. In the case of virtio-mem,
where we currently always online MAX_ORDER - 1 pages, the pages will be
placed to the HEAD of the freelist - undesirable. While the Hyper-V balloon
calls generic_online_page() with single pages, usually it will call it on
successive single pages in a larger block.

The pages are fresh, so place them to the tail of the freelists and avoid
the PCP.

Note: If we detect that the new behavior is undesirable for
__free_pages_core() during boot, we can let the caller specify the behavior.
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: "K. Y. Srinivasan"
Cc: Haiyang Zhang
Cc: Stephen Hemminger
Cc: Wei Liu
Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
---
 mm/page_alloc.c | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 75b0f49b4022..50746e6dc21b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -264,7 +264,8 @@ bool pm_suspended_storage(void)
 unsigned int pageblock_order __read_mostly;
 #endif
 
-static void __free_pages_ok(struct page *page, unsigned int order);
+static void __free_pages_ok(struct page *page, unsigned int order,
+			    fop_t fop_flags);
 
 /*
  * results with 256, 32 in the lowmem_reserve sysctl:
@@ -676,7 +677,7 @@ static void bad_page(struct page *page, const char *reason)
 void free_compound_page(struct page *page)
 {
 	mem_cgroup_uncharge(page);
-	__free_pages_ok(page, compound_order(page));
+	__free_pages_ok(page, compound_order(page), FOP_NONE);
 }
 
 void prep_compound_page(struct page *page, unsigned int order)
@@ -1402,17 +1403,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	spin_unlock(&zone->lock);
 }
 
-static void free_one_page(struct zone *zone,
-				struct page *page, unsigned long pfn,
-				unsigned int order,
-				int migratetype)
+static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
+			  unsigned int order, int migratetype, fop_t fop_flags)
 {
 	spin_lock(&zone->lock);
 	if (unlikely(has_isolate_pageblock(zone) ||
		     is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
+	__free_one_page(page, pfn, zone, order, migratetype, fop_flags);
 	spin_unlock(&zone->lock);
 }
 
@@ -1490,7 +1489,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
 	}
 }
 
-static void __free_pages_ok(struct page *page, unsigned int order)
+static void __free_pages_ok(struct page *page, unsigned int order,
+			    fop_t fop_flags)
 {
 	unsigned long flags;
 	int migratetype;
@@ -1502,7 +1502,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
 	migratetype = get_pfnblock_migratetype(page, pfn);
 	local_irq_save(flags);
 	__count_vm_events(PGFREE, 1 << order);
-	free_one_page(page_zone(page), page, pfn, order, migratetype);
+	free_one_page(page_zone(page), page, pfn, order, migratetype,
+		      fop_flags);
 	local_irq_restore(flags);
 }
 
@@ -1523,7 +1524,13 @@ void __free_pages_core(struct page *page, unsigned int order)
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 	set_page_refcounted(page);
-	__free_pages(page, order);
+
+	/*
+	 * Bypass PCP and place fresh pages right to the tail, primarily
+	 * relevant for memory onlining.
+	 */
+	page_ref_dec(page);
+	__free_pages_ok(page, order, FOP_TO_TAIL);
 }
 
 #ifdef CONFIG_NEED_MULTIPLE_NODES
@@ -3167,7 +3174,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
 	 */
 	if (migratetype >= MIGRATE_PCPTYPES) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(zone, page, pfn, 0, migratetype);
+			free_one_page(zone, page, pfn, 0, migratetype,
+				      FOP_NONE);
 			return;
 		}
 		migratetype = MIGRATE_MOVABLE;
@@ -4984,7 +4992,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
 	if (order == 0)		/* Via pcp? */
 		free_unref_page(page);
 	else
-		__free_pages_ok(page, order);
+		__free_pages_ok(page, order, FOP_NONE);
 }
 
 void __free_pages(struct page *page, unsigned int order)
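A rough userspace sketch of the refcount handling the __free_pages_core()
hunk relies on: set_page_refcounted() leaves a fresh page with refcount 1,
and instead of going through __free_pages() (which would use the PCP for
order-0 pages), the patch drops that last reference itself and frees via
__free_pages_ok(..., FOP_TO_TAIL). All names below are toy stand-ins, not
kernel APIs:

/* Standalone sketch (userspace C) of "drop the only reference, free directly". */
#include <stdio.h>

struct toy_page { int refcount; };

static void free_to_buddy_tail(struct toy_page *page, unsigned int order)
{
	(void)page;
	printf("freed order-%u page to freelist tail (PCP bypassed)\n", order);
}

static void free_pages_core_sketch(struct toy_page *page, unsigned int order)
{
	page->refcount = 1;			/* like set_page_refcounted() */

	page->refcount--;			/* like page_ref_dec(): we hold the only ref */
	free_to_buddy_tail(page, order);	/* like __free_pages_ok(..., FOP_TO_TAIL) */
}

int main(void)
{
	struct toy_page page;

	free_pages_core_sketch(&page, 9);	/* e.g., a MAX_ORDER - 1 chunk */
	return 0;
}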