From patchwork Thu Dec 5 16:22:12 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Duyck
X-Patchwork-Id: 11275151
Subject: [PATCH v15 1/7] mm: Adjust shuffle code to allow for future
 coalescing
From: Alexander Duyck
To: kvm@vger.kernel.org, mst@redhat.com, linux-kernel@vger.kernel.org,
 willy@infradead.org, mhocko@kernel.org, linux-mm@kvack.org,
 akpm@linux-foundation.org, mgorman@techsingularity.net, vbabka@suse.cz
Cc: yang.zhang.wz@gmail.com, nitesh@redhat.com, konrad.wilk@oracle.com,
 david@redhat.com, pagupta@redhat.com, riel@surriel.com,
 lcapitulino@redhat.com, dave.hansen@intel.com, wei.w.wang@intel.com,
 aarcange@redhat.com, pbonzini@redhat.com, dan.j.williams@intel.com,
 alexander.h.duyck@linux.intel.com,
 osalvador@suse.de
Date: Thu, 05 Dec 2019 08:22:12 -0800
Message-ID: <20191205162212.19548.68878.stgit@localhost.localdomain>
In-Reply-To: <20191205161928.19548.41654.stgit@localhost.localdomain>
References: <20191205161928.19548.41654.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty

From: Alexander Duyck

Move the head/tail adding logic out of the shuffle code and into the
__free_one_page function since ultimately that is where it is really
needed anyway. By doing this we should be able to reduce the overhead
and can consolidate all of the list addition bits in one spot.

Acked-by: David Hildenbrand
Reviewed-by: Dan Williams
Signed-off-by: Alexander Duyck
---
 include/linux/mmzone.h |   12 --------
 mm/page_alloc.c        |   71 ++++++++++++++++++++++++++++--------------------
 mm/shuffle.c           |   12 ++++----
 mm/shuffle.h           |    6 ++++
 4 files changed, 54 insertions(+), 47 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 89d8ff06c9ce..245010b24747 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -116,18 +116,6 @@ static inline void add_to_free_area_tail(struct page *page, struct free_area *ar
 	area->nr_free++;
 }
 
-#ifdef CONFIG_SHUFFLE_PAGE_ALLOCATOR
-/* Used to preserve page allocation order entropy */
-void add_to_free_area_random(struct page *page, struct free_area *area,
-		int migratetype);
-#else
-static inline void add_to_free_area_random(struct page *page,
-			struct free_area *area, int migratetype)
-{
-	add_to_free_area(page, area, migratetype);
-}
-#endif
-
 /* Used for pages which are on another list */
 static inline void move_to_free_area(struct page *page, struct free_area *area,
 				     int migratetype)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 62dcd6b76c80..32e9cc092656 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -878,6 +878,36 @@ static inline struct capture_control *task_capc(struct zone *zone)
 #endif /* CONFIG_COMPACTION */
 
 /*
+ * If this is not the largest possible page, check if the buddy
+ * of the next-highest order is free. If it is, it's possible
+ * that pages are being freed that will coalesce soon. In case,
+ * that is happening, add the free page to the tail of the list
+ * so it's less likely to be used soon and more likely to be merged
+ * as a higher order page
+ */
+static inline bool
+buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
+		   struct page *page, unsigned int order)
+{
+	struct page *higher_page, *higher_buddy;
+	unsigned long combined_pfn;
+
+	if (order >= MAX_ORDER - 2)
+		return false;
+
+	if (!pfn_valid_within(buddy_pfn))
+		return false;
+
+	combined_pfn = buddy_pfn & pfn;
+	higher_page = page + (combined_pfn - pfn);
+	buddy_pfn = __find_buddy_pfn(combined_pfn, order + 1);
+	higher_buddy = higher_page + (buddy_pfn - combined_pfn);
+
+	return pfn_valid_within(buddy_pfn) &&
+	       page_is_buddy(higher_page, higher_buddy, order + 1);
+}
+
+/*
  * Freeing function for a buddy system allocator.
 *
 * The concept of a buddy system is to maintain direct-mapped table
@@ -906,11 +936,13 @@ static inline void __free_one_page(struct page *page,
 		struct zone *zone, unsigned int order,
 		int migratetype)
 {
-	unsigned long combined_pfn;
+	struct capture_control *capc = task_capc(zone);
 	unsigned long uninitialized_var(buddy_pfn);
-	struct page *buddy;
+	unsigned long combined_pfn;
+	struct free_area *area;
 	unsigned int max_order;
-	struct capture_control *capc = task_capc(zone);
+	struct page *buddy;
+	bool to_tail;
 
 	max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
 
@@ -979,35 +1011,16 @@ static inline void __free_one_page(struct page *page,
 
 done_merging:
 	set_page_order(page, order);
 
-	/*
-	 * If this is not the largest possible page, check if the buddy
-	 * of the next-highest order is free. If it is, it's possible
-	 * that pages are being freed that will coalesce soon. In case,
-	 * that is happening, add the free page to the tail of the list
-	 * so it's less likely to be used soon and more likely to be merged
-	 * as a higher order page
-	 */
-	if ((order < MAX_ORDER-2) && pfn_valid_within(buddy_pfn)
-			&& !is_shuffle_order(order)) {
-		struct page *higher_page, *higher_buddy;
-		combined_pfn = buddy_pfn & pfn;
-		higher_page = page + (combined_pfn - pfn);
-		buddy_pfn = __find_buddy_pfn(combined_pfn, order + 1);
-		higher_buddy = higher_page + (buddy_pfn - combined_pfn);
-		if (pfn_valid_within(buddy_pfn) &&
-		    page_is_buddy(higher_page, higher_buddy, order + 1)) {
-			add_to_free_area_tail(page, &zone->free_area[order],
-					      migratetype);
-			return;
-		}
-	}
-
+	area = &zone->free_area[order];
 	if (is_shuffle_order(order))
-		add_to_free_area_random(page, &zone->free_area[order],
-				migratetype);
+		to_tail = shuffle_pick_tail();
 	else
-		add_to_free_area(page, &zone->free_area[order], migratetype);
+		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
+
+	if (to_tail)
+		add_to_free_area_tail(page, area, migratetype);
+	else
+		add_to_free_area(page, area, migratetype);
 }
 
 /*
diff --git a/mm/shuffle.c b/mm/shuffle.c
index b3fe97fd6654..e65d57f39486 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -183,11 +183,11 @@ void __meminit __shuffle_free_memory(pg_data_t *pgdat)
 		shuffle_zone(z);
 }
 
-void add_to_free_area_random(struct page *page, struct free_area *area,
-		int migratetype)
+bool shuffle_pick_tail(void)
 {
 	static u64 rand;
 	static u8 rand_bits;
+	bool ret;
 
 	/*
 	 * The lack of locking is deliberate. If 2 threads race to
@@ -198,10 +198,10 @@ void add_to_free_area_random(struct page *page, struct free_area *area,
 		rand = get_random_u64();
 	}
 
-	if (rand & 1)
-		add_to_free_area(page, area, migratetype);
-	else
-		add_to_free_area_tail(page, area, migratetype);
+	ret = rand & 1;
+
 	rand_bits--;
 	rand >>= 1;
+
+	return ret;
 }
diff --git a/mm/shuffle.h b/mm/shuffle.h
index 777a257a0d2f..4d79f03b6658 100644
--- a/mm/shuffle.h
+++ b/mm/shuffle.h
@@ -22,6 +22,7 @@ enum mm_shuffle_ctl {
 DECLARE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
 extern void page_alloc_shuffle(enum mm_shuffle_ctl ctl);
 extern void __shuffle_free_memory(pg_data_t *pgdat);
+extern bool shuffle_pick_tail(void);
 static inline void shuffle_free_memory(pg_data_t *pgdat)
 {
 	if (!static_branch_unlikely(&page_alloc_shuffle_key))
@@ -44,6 +45,11 @@ static inline bool is_shuffle_order(int order)
 	return order >= SHUFFLE_ORDER;
 }
 #else
+static inline bool shuffle_pick_tail(void)
+{
+	return false;
+}
+
 static inline void shuffle_free_memory(pg_data_t *pgdat)
 {
 }
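
For reference, here is a minimal userspace sketch of the buddy
arithmetic that buddy_merge_likely() relies on. It is not kernel code:
find_buddy_pfn() is a local stand-in mirroring the kernel's
__find_buddy_pfn() (pfn ^ (1 << order)), and the sample PFN is made up.

#include <stdio.h>

/* Stand-in for the kernel's __find_buddy_pfn(): the buddy of a
 * 2^order-page block differs from it in exactly bit 'order' of
 * the page frame number. */
static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
	return pfn ^ (1UL << order);
}

int main(void)
{
	unsigned int order = 3;		/* an order-3 (8-page) block */
	unsigned long pfn = 0x1238;	/* PFN of the block being freed */
	unsigned long buddy_pfn = find_buddy_pfn(pfn, order);
	/* Start of the order+1 block containing both buddies; this is
	 * the "combined_pfn = buddy_pfn & pfn" step in the patch. */
	unsigned long combined_pfn = buddy_pfn & pfn;

	printf("pfn=%#lx buddy=%#lx combined=%#lx higher buddy=%#lx\n",
	       pfn, buddy_pfn, combined_pfn,
	       find_buddy_pfn(combined_pfn, order + 1));
	return 0;
}

This prints pfn=0x1238 buddy=0x1230 combined=0x1230 higher buddy=0x1220:
the freed order-3 block and its buddy would merge into the order-4 block
at 0x1230, and it is that block's buddy at 0x1220 which
buddy_merge_likely() tests with page_is_buddy() to decide whether a
tail insertion is worthwhile.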
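
Likewise, the entropy batching behind shuffle_pick_tail() can be
sketched outside the kernel. The point is that one 64-bit draw funds 64
head/tail decisions, one bit per call. In this sketch the names are
illustrative and libc random() stands in for get_random_u64(), so the
randomness is far weaker than the kernel's; only the batching pattern
carries over.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

static bool pick_tail(void)
{
	static uint64_t rand_word;
	static uint8_t rand_bits;
	bool ret;

	/* Refill once every 64 calls; random() is an illustrative,
	 * weaker substitute for the kernel's get_random_u64(). */
	if (rand_bits == 0) {
		rand_bits = 64;
		rand_word = ((uint64_t)random() << 31) ^ (uint64_t)random();
	}

	ret = rand_word & 1;	/* consume the low bit */

	rand_bits--;
	rand_word >>= 1;

	return ret;
}

As the comment retained in mm/shuffle.c notes, the static state is
deliberately left unlocked: if two callers race on the refill, a few
bits of entropy are wasted or reused, which is harmless for a decision
that only needs to be statistically even.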