From patchwork Tue Oct 1 15:29:35 2019
X-Patchwork-Submitter: Alexander Duyck
X-Patchwork-Id: 11169061
Subject: [PATCH v11 3/6] mm: Introduce Reported pages
From: Alexander Duyck
To: virtio-dev@lists.oasis-open.org, kvm@vger.kernel.org, mst@redhat.com,
 david@redhat.com, dave.hansen@intel.com, linux-kernel@vger.kernel.org,
 willy@infradead.org, mhocko@kernel.org, linux-mm@kvack.org,
 akpm@linux-foundation.org, mgorman@techsingularity.net, vbabka@suse.cz,
 osalvador@suse.de
Cc: yang.zhang.wz@gmail.com, pagupta@redhat.com, konrad.wilk@oracle.com,
 nitesh@redhat.com, riel@surriel.com, lcapitulino@redhat.com,
 wei.w.wang@intel.com, aarcange@redhat.com, pbonzini@redhat.com,
 dan.j.williams@intel.com,
 alexander.h.duyck@linux.intel.com
Date: Tue, 01 Oct 2019 08:29:35 -0700
Message-ID: <20191001152934.27008.14328.stgit@localhost.localdomain>
In-Reply-To: <20191001152441.27008.99285.stgit@localhost.localdomain>
References: <20191001152441.27008.99285.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty
X-Mailing-List: kvm@vger.kernel.org

From: Alexander Duyck

In order to pave the way for free page reporting in virtualized
environments we will need a way to get pages out of the free lists and
identify those pages after they have been returned. To accomplish this,
this patch adds the concept of a Reported Buddy, which is essentially
the Uptodate flag used in conjunction with the Buddy page type.

It adds a set of pointers we shall call "reported_boundary" which
represent the upper boundary between the unreported and reported pages.
The general idea is that in order for a page to cross from one side of
the boundary to the other it must go through the reporting process.
Ultimately a free list has been fully processed when the boundary has
been moved from the tail all the way up to occupying the first entry in
the list. Without this we would have to manually walk the entire page
list until we find a page that hasn't been reported. In my testing this
adds as much as 18% additional overhead, which would make this
unattractive as a solution.

One limitation to this approach is that it is essentially a linear
search, and in the case of the free lists pages can be added to either
the head or the tail of the list. In order to place limits on this we
only allow pages to be added in front of the reported_boundary instead
of adding to the tail itself. An added advantage to this approach is
that it should reduce the overall memory footprint of the guest, as the
guest will be more likely to recycle warm pages than to allocate the
reported pages that were likely evicted from guest memory.

Since we will only be reporting one zone at a time we keep the boundary
limited to being defined for just the zone we are currently reporting
pages from. Doing this we can keep the number of additional pointers
needed quite small. To flag that the boundaries are in place we use a
single bit in the zone flags to indicate that reporting, and thereby
the boundaries, are active.

We store the index of the boundary pointer used to track the reported
page in the page->index value. Doing this we can avoid unnecessary
computation to determine the index value again. There should be no
issues with this as the value is unused when the page is in the buddy
allocator, and is reset as soon as the page is removed from the free
list.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
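To make the boundary walk described above concrete, here is a minimal
userspace sketch. It is a toy model, not the kernel implementation: it
substitutes a hand-rolled doubly linked list for the kernel's struct
list_head, a bare struct for struct page, and the helper name
list_insert_before() is invented for the example. It shows how the
boundary starts at the tail insertion point, how unreported pages are
always inserted in front of it, and how one pass moves it up to the
first entry of the list:

  #include <stdbool.h>
  #include <stdio.h>

  /* Toy stand-ins for the kernel's struct list_head and struct page. */
  struct list_head {
          struct list_head *next, *prev;
  };

  struct page {
          struct list_head lru;  /* first member, so the cast below is valid */
          int id;
          bool reported;
  };

  /* Insert @entry before @pos, like the kernel's list_add_tail(). */
  static void list_insert_before(struct list_head *entry, struct list_head *pos)
  {
          entry->prev = pos->prev;
          entry->next = pos;
          pos->prev->next = entry;
          pos->prev = entry;
  }

  int main(void)
  {
          struct list_head free_list = { &free_list, &free_list };
          struct list_head *boundary = &free_list;  /* tail insertion point */
          struct page pages[4];
          int i;

          for (i = 0; i < 4; i++) {
                  pages[i].id = i;
                  pages[i].reported = false;
                  /* unreported pages always land in front of the boundary */
                  list_insert_before(&pages[i].lru, boundary);
          }

          /*
           * Report the page just ahead of the boundary, then move the
           * boundary onto it. Reported pages accumulate between the
           * boundary and the tail; the list is fully processed once the
           * boundary occupies the first entry.
           */
          while (boundary->prev != &free_list) {
                  struct page *page = (struct page *)boundary->prev;

                  page->reported = true;
                  boundary = &page->lru;
                  printf("reported page %d\n", page->id);
          }
          return 0;
  }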
 include/linux/mmzone.h     |   16 ++++
 include/linux/page-flags.h |   11 +++
 mm/Kconfig                 |   11 +++
 mm/compaction.c            |    5 +
 mm/memory_hotplug.c        |    2 +
 mm/page_alloc.c            |   67 +++++++++++++++--
 mm/page_reporting.h        |  176 ++++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 281 insertions(+), 7 deletions(-)
 create mode 100644 mm/page_reporting.h

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 270a7b493174..53922c30b8d8 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -463,6 +463,14 @@ struct zone {
 	seqlock_t		span_seqlock;
 #endif
 
+#ifdef CONFIG_PAGE_REPORTING
+	/*
+	 * Pointer to reported page tracking statistics array. The size of
+	 * the array is MAX_ORDER - PAGE_REPORTING_MIN_ORDER. NULL when
+	 * unused page reporting is not present.
+	 */
+	unsigned long		*reported_pages;
+#endif
 	int initialized;
 
 	/* Write-intensive fields used from the page allocator */
@@ -538,6 +546,14 @@ enum zone_flags {
 	ZONE_BOOSTED_WATERMARK,		/* zone recently boosted watermarks.
 					 * Cleared when kswapd is woken.
 					 */
+	ZONE_PAGE_REPORTING_ACTIVE,	/* zone enabled page reporting and is
+					 * actively flushing the data out of
+					 * higher order pages.
+					 */
+	ZONE_PAGE_REPORTING_REQUESTED,	/* zone enabled page reporting and has
+					 * requested flushing the data out of
+					 * higher order pages.
+					 */
 };
 
 static inline unsigned long zone_managed_pages(struct zone *zone)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f91cb8898ff0..759a3b3956f2 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -163,6 +163,9 @@ enum pageflags {
 
 	/* non-lru isolated movable page */
 	PG_isolated = PG_reclaim,
+
+	/* Buddy pages. Used to track which pages have been reported */
+	PG_reported = PG_uptodate,
 };
 
 #ifndef __GENERATING_BOUNDS_H
@@ -432,6 +435,14 @@ static inline bool set_hwpoison_free_buddy_page(struct page *page)
 #endif
 
 /*
+ * PageReported() is used to track reported free pages within the Buddy
+ * allocator. We can use the non-atomic version of the test and set
+ * operations as both should be shielded with the zone lock to prevent
+ * any possible races on the setting or clearing of the bit.
+ */
+__PAGEFLAG(Reported, reported, PF_NO_COMPOUND)
+
+/*
  * On an anonymous page mapped into a user virtual memory area,
  * page->mapping points to its anon_vma, not to a struct address_space;
  * with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
diff --git a/mm/Kconfig b/mm/Kconfig
index a5dae9a7eb51..0419b2a9be3e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -237,6 +237,17 @@ config COMPACTION
 	  linux-mm@kvack.org.
 
 #
+# support for unused page reporting
+config PAGE_REPORTING
+	bool "Allow for reporting of unused pages"
+	def_bool n
+	help
+	  Unused page reporting allows for the incremental acquisition of
+	  unused pages from the buddy allocator for the purpose of reporting
+	  those pages to another entity, such as a hypervisor, so that the
+	  memory can be freed up for other uses.
+
+#
 # support for page migration
 #
 config MIGRATION
diff --git a/mm/compaction.c b/mm/compaction.c
index ce08b39d85d4..60e064330b3a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -24,6 +24,7 @@
 #include <linux/page_owner.h>
 #include <linux/psi.h>
 #include "internal.h"
+#include "page_reporting.h"
 
 #ifdef CONFIG_COMPACTION
 static inline void count_compact_event(enum vm_event_item item)
@@ -1325,6 +1326,8 @@ static int next_search_order(struct compact_control *cc, int order)
 			continue;
 
 		spin_lock_irqsave(&cc->zone->lock, flags);
+		page_reporting_free_area_release(cc->zone, order,
+						 MIGRATE_MOVABLE);
 		freelist = &area->free_list[MIGRATE_MOVABLE];
 		list_for_each_entry_reverse(freepage, freelist, lru) {
 			unsigned long pfn;
@@ -1681,6 +1684,8 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
 			continue;
 
 		spin_lock_irqsave(&cc->zone->lock, flags);
+		page_reporting_free_area_release(cc->zone, order,
+						 MIGRATE_MOVABLE);
 		freelist = &area->free_list[MIGRATE_MOVABLE];
 		list_for_each_entry(freepage, freelist, lru) {
 			unsigned long free_pfn;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 680b4b3e57d9..be9634819218 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -41,6 +41,7 @@
 
 #include "internal.h"
 #include "shuffle.h"
+#include "page_reporting.h"
 
 /*
  * online_page_callback contains pointer to current page onlining function.
@@ -1624,6 +1625,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	if (!populated_zone(zone)) {
 		zone_pcp_reset(zone);
 		build_all_zonelists(NULL);
+		page_reporting_reset_zone(zone);
 	} else
 		zone_pcp_update(zone);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5e142047f730..c82c00ea1f5c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -74,6 +74,7 @@
 #include <asm/div64.h>
 #include "internal.h"
 #include "shuffle.h"
+#include "page_reporting.h"
 
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
@@ -891,10 +892,15 @@ static inline void add_to_free_list(struct page *page, struct zone *zone,
 static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 					 unsigned int order, int migratetype)
 {
-	struct free_area *area = &zone->free_area[order];
+	struct list_head *tail = get_unreported_tail(zone, order, migratetype);
 
-	list_add_tail(&page->lru, &area->free_list[migratetype]);
-	area->nr_free++;
+	/*
+	 * To prevent the unreported pages from slipping behind our iterator
+	 * we will force them to be inserted in front of it. By doing this
+	 * we should only need to make one pass through the freelist.
+	 */
+	list_add_tail(&page->lru, tail);
+	zone->free_area[order].nr_free++;
 }
 
 /* Used for pages which are on another list */
@@ -903,12 +909,20 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
 {
 	struct free_area *area = &zone->free_area[order];
 
+	/* Make certain the page isn't occupying the boundary */
+	if (page_is_reported(page))
+		__del_page_from_reported_list(page, zone);
+
 	list_move(&page->lru, &area->free_list[migratetype]);
 }
 
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 					   unsigned int order)
 {
+	/* remove page from reported list, and clear reported state */
+	if (page_is_reported(page))
+		del_page_from_reported_list(page, zone, order);
+
 	list_del(&page->lru);
 	__ClearPageBuddy(page);
 	set_page_private(page, 0);
@@ -972,7 +986,7 @@ static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 static inline void __free_one_page(struct page *page,
 		unsigned long pfn,
 		struct zone *zone, unsigned int order,
-		int migratetype)
+		int migratetype, bool reported)
 {
 	struct capture_control *capc = task_capc(zone);
 	unsigned long uninitialized_var(buddy_pfn);
@@ -1048,7 +1062,9 @@ static inline void __free_one_page(struct page *page,
 done_merging:
 	set_page_order(page, order);
 
-	if (is_shuffle_order(order))
+	if (reported)
+		to_tail = true;
+	else if (is_shuffle_order(order))
 		to_tail = shuffle_pick_tail();
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
@@ -1367,7 +1383,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		if (unlikely(isolated_pageblocks))
 			mt = get_pageblock_migratetype(page);
 
-		__free_one_page(page, page_to_pfn(page), zone, 0, mt);
+		__free_one_page(page, page_to_pfn(page), zone, 0, mt, false);
 		trace_mm_page_pcpu_drain(page, 0, mt);
 	}
 	spin_unlock(&zone->lock);
@@ -1383,7 +1399,7 @@ static void free_one_page(struct zone *zone,
 		is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype);
+	__free_one_page(page, pfn, zone, order, migratetype, false);
 	spin_unlock(&zone->lock);
 }
 
@@ -2245,6 +2261,43 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 	return NULL;
 }
 
+#ifdef CONFIG_PAGE_REPORTING
+struct list_head **reported_boundary __read_mostly;
+
+/**
+ * free_reported_page - Return a now-reported page back where we got it
+ * @page: Page that was reported
+ * @order: Order of the reported page
+ *
+ * This function will pull the migratetype and order information out
+ * of the page and attempt to return it where it found it. If the page
+ * is added to the free list without changes we will mark it as being
+ * reported.
+ */
+void free_reported_page(struct page *page, unsigned int order)
+{
+	struct zone *zone = page_zone(page);
+	unsigned long pfn;
+	unsigned int mt;
+
+	/* zone lock should be held when this function is called */
+	lockdep_assert_held(&zone->lock);
+
+	pfn = page_to_pfn(page);
+	mt = get_pfnblock_migratetype(page, pfn);
+	__free_one_page(page, pfn, zone, order, mt, true);
+
+	/*
+	 * If page was not commingled with another page we can consider
+	 * the result to be "reported" since part of the page hasn't been
+	 * modified, otherwise we would need to report on the new larger
+	 * page.
+	 */
+	if (PageBuddy(page) && page_order(page) == order)
+		add_page_to_reported_list(page, zone, order, mt);
+}
+#endif /* CONFIG_PAGE_REPORTING */
+
 /*
  * This array describes the order lists are fallen back to when
  * the free lists for the desirable migrate type are depleted
diff --git a/mm/page_reporting.h b/mm/page_reporting.h
new file mode 100644
index 000000000000..ee4d86daa089
--- /dev/null
+++ b/mm/page_reporting.h
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _MM_PAGE_REPORTING_H
+#define _MM_PAGE_REPORTING_H
+
+#include <linux/mmzone.h>
+#include <linux/pageblock-flags.h>
+#include <linux/page-isolation.h>
+#include <linux/jump_label.h>
+#include <linux/slab.h>
+#include <asm/pgtable.h>
+
+#define PAGE_REPORTING_MIN_ORDER	pageblock_order
+
+#ifdef CONFIG_PAGE_REPORTING
+/* Reported page accessors, defined in page_alloc.c */
+void free_reported_page(struct page *page, unsigned int order);
+
+#define page_is_reported(_page)	unlikely(PageReported(_page))
+
+/* Free reported_pages and reset reported page tracking count to 0 */
+static inline void page_reporting_reset_zone(struct zone *zone)
+{
+	kfree(zone->reported_pages);
+	zone->reported_pages = NULL;
+}
+
+/* Boundary functions */
+static inline pgoff_t
+get_reporting_index(unsigned int order, unsigned int migratetype)
+{
+	/*
+	 * We will only ever be dealing with pages greater-than or equal to
+	 * PAGE_REPORTING_MIN_ORDER. Since that is the case we can avoid
+	 * allocating unused space by limiting our index range to only the
+	 * orders that are supported for page reporting.
+	 */
+	return (order - PAGE_REPORTING_MIN_ORDER) * MIGRATE_TYPES + migratetype;
+}
+
+extern struct list_head **reported_boundary __read_mostly;
+
+static inline void
+page_reporting_reset_boundary(struct zone *zone, unsigned int order, int mt)
+{
+	int index;
+
+	if (order < PAGE_REPORTING_MIN_ORDER)
+		return;
+	if (!test_bit(ZONE_PAGE_REPORTING_ACTIVE, &zone->flags))
+		return;
+
+	index = get_reporting_index(order, mt);
+	reported_boundary[index] = &zone->free_area[order].free_list[mt];
+}
+
+static inline void page_reporting_disable_boundaries(struct zone *zone)
+{
+	/* zone lock should be held when this function is called */
+	lockdep_assert_held(&zone->lock);
+
+	__clear_bit(ZONE_PAGE_REPORTING_ACTIVE, &zone->flags);
+}
+
+static inline void
+page_reporting_free_area_release(struct zone *zone, unsigned int order, int mt)
+{
+	page_reporting_reset_boundary(zone, order, mt);
+}
+
+/*
+ * Method for obtaining the tail of the free list. Using this allows for
+ * tail insertions of unreported pages into the region that is currently
+ * being scanned so as to avoid interleaving reported and unreported pages.
+ */
+static inline struct list_head *
+get_unreported_tail(struct zone *zone, unsigned int order, int migratetype)
+{
+	if (order >= PAGE_REPORTING_MIN_ORDER &&
+	    test_bit(ZONE_PAGE_REPORTING_ACTIVE, &zone->flags))
+		return reported_boundary[get_reporting_index(order,
+							     migratetype)];
+
+	return &zone->free_area[order].free_list[migratetype];
+}
+
+/*
+ * Functions for adding/removing reported pages to the freelist.
+ * All of them expect the zone lock to be held to maintain
+ * consistency of the reported list as a subset of the free list.
+ */
+static inline void
+add_page_to_reported_list(struct page *page, struct zone *zone,
+			  unsigned int order, unsigned int mt)
+{
+	/*
+	 * Default to using index 0, this will be updated later if the zone
+	 * is still being processed.
+	 */
+	page->index = 0;
+
+	/* flag page as reported */
+	__SetPageReported(page);
+
+	/* update reported page accounting */
+	zone->reported_pages[order - PAGE_REPORTING_MIN_ORDER]++;
+}
+
+static inline void page_reporting_pull_boundary(struct page *page)
+{
+	struct list_head **tail = &reported_boundary[page->index];
+
+	if (*tail == &page->lru)
+		*tail = page->lru.next;
+}
+
+static inline void
+__del_page_from_reported_list(struct page *page, struct zone *zone)
+{
+	/*
+	 * Since the page is being pulled from the list we need to update
+	 * the boundary, after that we can just update the index so that
+	 * the correct boundary will be checked in the future.
+	 */
+	if (test_bit(ZONE_PAGE_REPORTING_ACTIVE, &zone->flags))
+		page_reporting_pull_boundary(page);
+}
+
+static inline void
+del_page_from_reported_list(struct page *page, struct zone *zone,
+			    unsigned int order)
+{
+	__del_page_from_reported_list(page, zone);
+
+	/* page_private will contain the page order, so just use it directly */
+	zone->reported_pages[order - PAGE_REPORTING_MIN_ORDER]--;
+
+	/* clear the flag so we can report on it when it returns */
+	__ClearPageReported(page);
+}
+
+#else /* CONFIG_PAGE_REPORTING */
+#define page_is_reported(_page)	false
+
+static inline void page_reporting_reset_zone(struct zone *zone)
+{
+}
+
+static inline void
+page_reporting_free_area_release(struct zone *zone, unsigned int order, int mt)
+{
+}
+
+static inline struct list_head *
+get_unreported_tail(struct zone *zone, unsigned int order, int migratetype)
+{
+	return &zone->free_area[order].free_list[migratetype];
+}
+
+static inline void
+add_page_to_reported_list(struct page *page, struct zone *zone,
+			  int order, int migratetype)
+{
+}
+
+static inline void
+__del_page_from_reported_list(struct page *page, struct zone *zone)
+{
+}
+
+static inline void
+del_page_from_reported_list(struct page *page, struct zone *zone,
+			    unsigned int order)
+{
+}
+#endif /* CONFIG_PAGE_REPORTING */
+#endif /*_MM_PAGE_REPORTING_H */
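A closing note on the boundary index math in get_reporting_index(): the
flat reported_boundary[] array holds one pointer per (order, migratetype)
pair at or above PAGE_REPORTING_MIN_ORDER. The standalone sketch below
reproduces the same arithmetic; the three constants are assumptions made
for illustration (pageblock_order is 9 on x86_64 with 4K base pages and
2M huge pages, MAX_ORDER is 11 by default, and MIGRATE_TYPES is commonly
6 when CMA and memory isolation are enabled):

  #include <stdio.h>

  #define PAGE_REPORTING_MIN_ORDER 9   /* pageblock_order, assumed */
  #define MAX_ORDER                11  /* kernel default, assumed */
  #define MIGRATE_TYPES            6   /* config dependent, assumed */

  /* Mirrors get_reporting_index() from mm/page_reporting.h above. */
  static int get_reporting_index(unsigned int order, unsigned int migratetype)
  {
          return (order - PAGE_REPORTING_MIN_ORDER) * MIGRATE_TYPES +
                 migratetype;
  }

  int main(void)
  {
          unsigned int order, mt;

          for (order = PAGE_REPORTING_MIN_ORDER; order < MAX_ORDER; order++)
                  for (mt = 0; mt < MIGRATE_TYPES; mt++)
                          printf("order %u, mt %u -> index %d\n",
                                 order, mt, get_reporting_index(order, mt));

          /* 2 orders x 6 migratetypes = 12 boundary pointers in total */
          return 0;
  }

With these values the whole array is a dozen entries, which is the
"quite small" number of additional pointers the commit message refers to.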