From patchwork Mon Oct  5 12:15:30 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11816521
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
    Andrew Morton, Matthew Wilcox, David Hildenbrand, Alexander Duyck,
    Vlastimil Babka, Oscar Salvador, Wei Yang, Pankaj Gupta,
    Michal Hocko, Mel Gorman, Dave Hansen, Mike Rapoport
Subject: [PATCH v2 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
Date: Mon, 5 Oct 2020 14:15:30 +0200
Message-Id: <20201005121534.15649-2-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>

Let's prepare for additional flags and avoid long parameter lists of
bools. Follow-up patches will also make use of the flags in
__free_pages_ok().
Reviewed-by: Alexander Duyck
Reviewed-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Reviewed-by: Wei Yang
Reviewed-by: Pankaj Gupta
Acked-by: Michal Hocko
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Signed-off-by: David Hildenbrand
---
 mm/page_alloc.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7012d67a302d..2bf235b1953f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -78,6 +78,22 @@
 #include "shuffle.h"
 #include "page_reporting.h"
 
+/* Free Page Internal flags: for internal, non-pcp variants of free_pages(). */
+typedef int __bitwise fpi_t;
+
+/* No special request */
+#define FPI_NONE		((__force fpi_t)0)
+
+/*
+ * Skip free page reporting notification for the (possibly merged) page.
+ * This does not hinder free page reporting from grabbing the page,
+ * reporting it and marking it "reported" - it only skips notifying
+ * the free page reporting infrastructure about a newly freed page. For
+ * example, used when temporarily pulling a page from a freelist and
+ * putting it back unmodified.
+ */
+#define FPI_SKIP_REPORT_NOTIFY	((__force fpi_t)BIT(0))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -952,7 +968,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 static inline void __free_one_page(struct page *page,
 		unsigned long pfn,
 		struct zone *zone, unsigned int order,
-		int migratetype, bool report)
+		int migratetype, fpi_t fpi_flags)
 {
 	struct capture_control *capc = task_capc(zone);
 	unsigned long buddy_pfn;
@@ -1039,7 +1055,7 @@ static inline void __free_one_page(struct page *page,
 		add_to_free_list(page, zone, order, migratetype);
 
 	/* Notify page reporting subsystem of freed page */
-	if (report)
+	if (!(fpi_flags & FPI_SKIP_REPORT_NOTIFY))
 		page_reporting_notify_free(order);
 }
 
@@ -1380,7 +1396,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		if (unlikely(isolated_pageblocks))
 			mt = get_pageblock_migratetype(page);
 
-		__free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
+		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FPI_NONE);
 		trace_mm_page_pcpu_drain(page, 0, mt);
 	}
 	spin_unlock(&zone->lock);
@@ -1396,7 +1412,7 @@ static void free_one_page(struct zone *zone,
 	if (unlikely(has_isolate_pageblock(zone) ||
 		     is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype, true);
+	__free_one_page(page, pfn, zone, order, migratetype, FPI_NONE);
 	spin_unlock(&zone->lock);
 }
 
@@ -3289,7 +3305,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 	lockdep_assert_held(&zone->lock);
 
 	/* Return isolated page to tail of freelist. */
-	__free_one_page(page, page_to_pfn(page), zone, order, mt, false);
+	__free_one_page(page, page_to_pfn(page), zone, order, mt,
+			FPI_SKIP_REPORT_NOTIFY);
 }
 
 /*
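For readers unfamiliar with the idiom: the conversion relies on sparse's
restricted-type checking. Below is a minimal, stand-alone sketch of the
same pattern (a hypothetical user-space example, not kernel code). It
builds with any C compiler; under sparse (__CHECKER__ defined), passing a
plain integer or bool where an fpi_t is expected warns unless a __force
cast is used - which is exactly what prevents callers from accidentally
passing a bare true/false again.

/*
 * Hypothetical stand-alone illustration of the sparse-checked
 * bitwise-flag pattern introduced by this patch; not kernel code.
 */
#ifdef __CHECKER__
#define __bitwise	__attribute__((bitwise))
#define __force		__attribute__((force))
#else
#define __bitwise
#define __force
#endif

#define BIT(nr)			(1UL << (nr))

typedef int __bitwise fpi_t;

#define FPI_NONE		((__force fpi_t)0)
#define FPI_SKIP_REPORT_NOTIFY	((__force fpi_t)BIT(0))

static void example_free(fpi_t fpi_flags)
{
	if (!(fpi_flags & FPI_SKIP_REPORT_NOTIFY)) {
		/* ... notify free page reporting here ... */
	}
}

int main(void)
{
	example_free(FPI_NONE);				/* OK */
	example_free(FPI_SKIP_REPORT_NOTIFY);		/* OK */
	/* example_free(1); <- would warn under sparse */
	return 0;
}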
From patchwork Mon Oct  5 12:15:31 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11816529
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
    Andrew Morton, Matthew Wilcox, David Hildenbrand, Alexander Duyck,
    Oscar Salvador, Wei Yang, Pankaj Gupta, Michal Hocko, Mel Gorman,
    Dave Hansen, Vlastimil Babka, Mike Rapoport, Scott Cheloha,
    Michael Ellerman
Subject: [PATCH v2 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()
Date: Mon, 5 Oct 2020 14:15:31 +0200
Message-Id: <20201005121534.15649-3-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>

__putback_isolated_page() already documents that pages will be placed
to the tail of the freelist - this is, however, not the case for
"order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
the case for all existing users.

This change affects two users:
- free page reporting
- page isolation, when undoing the isolation (including memory
  onlining).

This behavior is desirable for pages that haven't really been touched
lately, so exactly the two users that don't actually read/write page
content, but rather move untouched pages.

The new behavior is especially desirable for memory onlining, where we
allow allocation of newly onlined pages via undo_isolate_page_range()
in online_pages(). Right now, we always place them to the head of the
freelist, resulting in undesirable behavior: Assume we add individual
memory chunks via add_memory() and online them right away to the NORMAL
zone. We create a dependency chain of unmovable allocations e.g., via
the memmap. The memmap of the next chunk will be placed onto previous
chunks - if the last block cannot get offlined+removed, all dependent
ones cannot get offlined+removed. While this can already be observed
with individual DIMMs, it's more of an issue for virtio-mem (and I
suspect also ppc DLPAR).

Document that this should only be used for optimizations, and no code
should rely on this behavior for correctness (in case the order of the
freelists ever changes).

We won't care about page shuffling: memory onlining already properly
shuffles after onlining. Free page reporting doesn't care about
physically contiguous ranges, and there are already cases where page
isolation will simply move (physically close) free pages to (currently)
the head of the freelists via move_freepages_block() instead of
shuffling. If this ever becomes relevant, we should shuffle the whole
zone when undoing isolation of larger ranges, and after
free_contig_range().

Reviewed-by: Alexander Duyck
Reviewed-by: Oscar Salvador
Reviewed-by: Wei Yang
Reviewed-by: Pankaj Gupta
Acked-by: Michal Hocko
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Scott Cheloha
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 mm/page_alloc.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2bf235b1953f..df5ff0cd6df1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -94,6 +94,18 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_SKIP_REPORT_NOTIFY	((__force fpi_t)BIT(0))
 
+/*
+ * Place the (possibly merged) page to the tail of the freelist. Will ignore
+ * page shuffling (relevant code - e.g., memory onlining - is expected to
+ * shuffle the whole zone).
+ *
+ * Note: No code should rely on this flag for correctness - it's purely
+ *       to allow for optimizations when handing back either fresh pages
+ *       (memory onlining) or untouched pages (page isolation, free page
+ *       reporting).
+ */
+#define FPI_TO_TAIL		((__force fpi_t)BIT(1))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -1044,7 +1056,9 @@ static inline void __free_one_page(struct page *page,
 done_merging:
 	set_page_order(page, order);
 
-	if (is_shuffle_order(order))
+	if (fpi_flags & FPI_TO_TAIL)
+		to_tail = true;
+	else if (is_shuffle_order(order))
 		to_tail = shuffle_pick_tail();
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
@@ -3306,7 +3320,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
-			FPI_SKIP_REPORT_NOTIFY);
+			FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
 }
 
 /*
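Why tail placement delays reuse: the allocation side always takes the
first entry of a per-order freelist. The helper below mirrors mainline's
get_page_from_free_area() from mm/page_alloc.c of this era - shown here
as a condensed model, not the verbatim source - so a page freed with
FPI_TO_TAIL (queued via add_to_free_list_tail()) becomes the last
allocation candidate of its order/migratetype.

#include <linux/list.h>
#include <linux/mm_types.h>
#include <linux/mmzone.h>

/*
 * Condensed model of the allocation side: allocation scans from the
 * head, so tail-placed pages are considered last.
 */
static inline struct page *get_page_from_free_area(struct free_area *area,
						   int migratetype)
{
	return list_first_entry_or_null(&area->free_list[migratetype],
					struct page, lru);
}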
From patchwork Mon Oct  5 12:15:32 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11816531
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
    Andrew Morton, Matthew Wilcox, David Hildenbrand, Oscar Salvador,
    Pankaj Gupta, Wei Yang, Alexander Duyck, Mel Gorman, Michal Hocko,
    Dave Hansen, Vlastimil Babka, Mike Rapoport, Scott Cheloha,
    Michael Ellerman
Subject: [PATCH v2 3/5] mm/page_alloc: move pages to tail in move_to_free_list()
Date: Mon, 5 Oct 2020 14:15:32 +0200
Message-Id: <20201005121534.15649-4-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>

Whenever we move pages between freelists via move_to_free_list()/
move_freepages_block(), we don't actually touch the pages:

1. Page isolation doesn't actually touch the pages, it simply isolates
   pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
   When undoing isolation, we move the pages back to the target list.
2. Page stealing (steal_suitable_fallback()) moves free pages directly
   between lists without touching them.
3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() move
   free pages directly between freelists without touching them.

We already place pages to the tail of the freelists when undoing
isolation via __putback_isolated_page(), let's do it in any case (e.g.,
if order <= pageblock_order) and document the behavior. To simplify,
let's move the pages to the tail for all move_to_free_list()/
move_freepages_block() users.

In 2., the target list is empty, so there should be no change. In 3.,
we might observe a change, however, highatomic is more concerned about
allocations succeeding than cache hotness - if we ever realize this
change degrades a workload, we can special-case this instance and add a
proper comment.

This change results in all pages onlined via online_pages() being
placed at the tail of the freelist.

Reviewed-by: Oscar Salvador
Acked-by: Pankaj Gupta
Reviewed-by: Wei Yang
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Scott Cheloha
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
Acked-by: Michal Hocko
Reviewed-by: Vlastimil Babka
---
 mm/page_alloc.c     | 10 +++++++---
 mm/page_isolation.c |  5 +++++
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df5ff0cd6df1..b187e46cf640 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -901,13 +901,17 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 	area->nr_free++;
 }
 
-/* Used for pages which are on another list */
+/*
+ * Used for pages which are on another list. Move the pages to the tail
+ * of the list - so the moved pages won't immediately be considered for
+ * allocation again (e.g., optimization for memory onlining).
+ */
 static inline void move_to_free_list(struct page *page, struct zone *zone,
 				     unsigned int order, int migratetype)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_move(&page->lru, &area->free_list[migratetype]);
+	list_move_tail(&page->lru, &area->free_list[migratetype]);
 }
 
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
@@ -2340,7 +2344,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
 #endif
 
 /*
- * Move the free pages in a range to the free lists of the requested type.
+ * Move the free pages in a range to the freelist tail of the requested type.
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abfe26ad59fd..83692b937784 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -106,6 +106,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * If we isolate freepage with more than pageblock_order, there
 	 * should be no freepage in the range, so we could avoid costly
 	 * pageblock scanning for freepage moving.
+	 *
+	 * We didn't actually touch any of the isolated pages, so place them
+	 * to the tail of the freelist. This is an optimization for memory
+	 * onlining - just onlined memory won't immediately be considered for
+	 * allocation.
 	 */
 	if (!isolated_page) {
 		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
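The entire behavioral change above is the switch from list_move() to
list_move_tail(). A hypothetical helper (a sketch assuming
<linux/list.h> semantics, not code from the patch) makes the two
placements explicit:

#include <linux/list.h>
#include <linux/mm_types.h>

/*
 * Hypothetical helper contrasting the two placements: list_move()
 * re-queues the entry at the head (next allocation candidate),
 * list_move_tail() at the tail (last candidate). move_to_free_list()
 * now unconditionally does the latter.
 */
static void requeue_free_page(struct page *page, struct list_head *freelist,
			      bool to_tail)
{
	if (to_tail)
		list_move_tail(&page->lru, freelist);
	else
		list_move(&page->lru, freelist);
}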
From patchwork Mon Oct  5 12:15:33 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11816555
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
    Andrew Morton, Matthew Wilcox, David Hildenbrand, Vlastimil Babka,
    Oscar Salvador, Pankaj Gupta, Wei Yang, Michal Hocko,
    Alexander Duyck, Mel Gorman, Dave Hansen, Mike Rapoport,
    "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu
Subject: [PATCH v2 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
Date: Mon, 5 Oct 2020 14:15:33 +0200
Message-Id: <20201005121534.15649-5-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>

__free_pages_core() is used when exposing fresh memory to the buddy
during system boot and when onlining memory in generic_online_page().

generic_online_page() is used in two cases:

1. Direct memory onlining in online_pages().
2. Deferred memory onlining in memory-ballooning-like mechanisms
   (Hyper-V balloon and virtio-mem), when parts of a section are kept
   fake-offline to be fake-onlined later on.

In 1, we already place pages to the tail of the freelist. Pages will be
freed to MIGRATE_ISOLATE lists first and moved to the tail of the
freelists via undo_isolate_page_range().

In 2, we currently don't implement a proper rule. In case of virtio-mem,
where we currently always online MAX_ORDER - 1 pages, the pages will be
placed to the HEAD of the freelist - undesirable. While the Hyper-V
balloon calls generic_online_page() with single pages, it will usually
call it on successive single pages in a larger block.

The pages are fresh, so place them to the tail of the freelist and avoid
the PCP. In __free_pages_core(), remove the now superfluous call to
set_page_refcounted() and add a comment regarding page initialization
and the refcount.

Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
is usually of limited use in virtualized environments), we might want to
shuffle after a sequence of generic_online_page() calls in the relevant
callers.

Reviewed-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Acked-by: Pankaj Gupta
Reviewed-by: Wei Yang
Acked-by: Michal Hocko
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: "K. Y. Srinivasan"
Cc: Haiyang Zhang
Cc: Stephen Hemminger
Cc: Wei Liu
Signed-off-by: David Hildenbrand
---
 mm/page_alloc.c | 33 +++++++++++++++++++++++----------
 1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b187e46cf640..3dadcc6d4009 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -275,7 +275,8 @@ bool pm_suspended_storage(void)
 unsigned int pageblock_order __read_mostly;
 #endif
 
-static void __free_pages_ok(struct page *page, unsigned int order);
+static void __free_pages_ok(struct page *page, unsigned int order,
+			    fpi_t fpi_flags);
 
 /*
  * results with 256, 32 in the lowmem_reserve sysctl:
@@ -687,7 +688,7 @@ static void bad_page(struct page *page, const char *reason)
 void free_compound_page(struct page *page)
 {
 	mem_cgroup_uncharge(page);
-	__free_pages_ok(page, compound_order(page));
+	__free_pages_ok(page, compound_order(page), FPI_NONE);
 }
 
 void prep_compound_page(struct page *page, unsigned int order)
@@ -1423,14 +1424,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 static void free_one_page(struct zone *zone,
 				struct page *page, unsigned long pfn,
 				unsigned int order,
-				int migratetype)
+				int migratetype, fpi_t fpi_flags)
 {
 	spin_lock(&zone->lock);
 	if (unlikely(has_isolate_pageblock(zone) ||
 		     is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype, FPI_NONE);
+	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
 	spin_unlock(&zone->lock);
 }
 
@@ -1508,7 +1509,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
 	}
 }
 
-static void __free_pages_ok(struct page *page, unsigned int order)
+static void __free_pages_ok(struct page *page, unsigned int order,
+			    fpi_t fpi_flags)
 {
 	unsigned long flags;
 	int migratetype;
@@ -1520,7 +1522,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
 	migratetype = get_pfnblock_migratetype(page, pfn);
 	local_irq_save(flags);
 	__count_vm_events(PGFREE, 1 << order);
-	free_one_page(page_zone(page), page, pfn, order, migratetype);
+	free_one_page(page_zone(page), page, pfn, order, migratetype,
+		      fpi_flags);
 	local_irq_restore(flags);
 }
 
@@ -1530,6 +1533,11 @@ void __free_pages_core(struct page *page, unsigned int order)
 	struct page *p = page;
 	unsigned int loop;
 
+	/*
+	 * When initializing the memmap, __init_single_page() sets the refcount
+	 * of all pages to 1 ("allocated"/"not free"). We have to set the
+	 * refcount of all involved pages to 0.
+	 */
 	prefetchw(p);
 	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
 		prefetchw(p + 1);
@@ -1540,8 +1548,12 @@ void __free_pages_core(struct page *page, unsigned int order)
 	set_page_count(p, 0);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
-	set_page_refcounted(page);
-	__free_pages(page, order);
+
+	/*
+	 * Bypass PCP and place fresh pages right to the tail, primarily
+	 * relevant for memory onlining.
+	 */
+	__free_pages_ok(page, order, FPI_TO_TAIL);
 }
 
 #ifdef CONFIG_NEED_MULTIPLE_NODES
@@ -3168,7 +3180,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
 	 */
 	if (migratetype >= MIGRATE_PCPTYPES) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(zone, page, pfn, 0, migratetype);
+			free_one_page(zone, page, pfn, 0, migratetype,
+				      FPI_NONE);
 			return;
 		}
 		migratetype = MIGRATE_MOVABLE;
@@ -4991,7 +5004,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
 	if (order == 0)		/* Via pcp? */
 		free_unref_page(page);
 	else
-		__free_pages_ok(page, order);
+		__free_pages_ok(page, order, FPI_NONE);
 }
 
 void __free_pages(struct page *page, unsigned int order)
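The refcount comment added above boils down to the following hand-off.
This is a condensed sketch of the loop in __free_pages_core() (the real
code additionally prefetches each page and handles the last page outside
the loop), assuming the standard page refcount helpers:

#include <linux/mm.h>
#include <linux/page-flags.h>

/*
 * Condensed sketch: __init_single_page() initializes every page in the
 * memmap with a refcount of 1 ("allocated"), so each page must be
 * dropped to 0 before the range is handed to the buddy as one free
 * high-order page.
 */
static void mark_range_free_sketch(struct page *page, unsigned int order)
{
	unsigned int nr_pages = 1 << order;
	struct page *p = page;
	unsigned int loop;

	for (loop = 0; loop < nr_pages; loop++, p++) {
		__ClearPageReserved(p);
		set_page_count(p, 0);	/* was 1 from __init_single_page() */
	}
}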
From patchwork Mon Oct  5 12:15:34 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11816559
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
    Andrew Morton, Matthew Wilcox, David Hildenbrand, Wei Yang,
    Michal Hocko, Alexander Duyck, Mel Gorman, Dave Hansen,
    Vlastimil Babka, Oscar Salvador, Mike Rapoport, Pankaj Gupta
Subject: [PATCH v2 5/5] mm/memory_hotplug: update comment regarding zone shuffling
Date: Mon, 5 Oct 2020 14:15:34 +0200
Message-Id: <20201005121534.15649-6-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>

As we no longer shuffle via generic_online_page() and when undoing
isolation, we can simplify the comment. We now effectively shuffle only
once (properly) when onlining new memory.

Reviewed-by: Wei Yang
Acked-by: Michal Hocko
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Pankaj Gupta
Signed-off-by: David Hildenbrand
Acked-by: Vlastimil Babka
Acked-by: Pankaj Gupta
---
 mm/memory_hotplug.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 03a00cb68bf7..b44d4c7ba73b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -858,13 +858,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
 
 	/*
-	 * When exposing larger, physically contiguous memory areas to the
-	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
-	 * them either to the head or the tail of the freelist) is only helpful
-	 * for maintaining the shuffle, but not for creating the initial
-	 * shuffle. Shuffle the whole zone to make sure the just onlined pages
-	 * are properly distributed across the whole freelist. Make sure to
-	 * shuffle once pageblocks are no longer isolated.
+	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
+	 * the tail of the freelist when undoing isolation). Shuffle the whole
+	 * zone to make sure the just onlined pages are properly distributed
+	 * across the whole freelist - to create an initial shuffle.
 	 */
 	shuffle_zone(zone);
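Taken together, the series leaves memory onlining with the shape below.
This is a condensed, annotated sketch of the tail end of online_pages()
under the assumptions that the mm-internal "shuffle.h" header is
available and that the surrounding context matches this era of the
kernel; it is not the verbatim kernel code.

#include <linux/mmzone.h>
#include <linux/page-isolation.h>
#include "shuffle.h"	/* mm-internal */

static void online_pages_tail_sketch(struct zone *zone, unsigned long pfn,
				     unsigned long nr_pages)
{
	/*
	 * generic_online_page() already freed the fresh pages via
	 * __free_pages_ok(page, order, FPI_TO_TAIL) - tail placement,
	 * bypassing the PCP (patch 4).
	 */

	/*
	 * Undoing isolation moves pages via move_to_free_list(), which
	 * now uses list_move_tail() - tail placement again (patch 3).
	 */
	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);

	/* One proper shuffle of the whole zone, per the updated comment. */
	shuffle_zone(zone);
}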