From patchwork Tue Jan 15 16:51:08 2019
X-Patchwork-Submitter: Arun KS
X-Patchwork-Id: 10764831
From: Arun KS
To: arunks.linux@gmail.com, alexander.h.duyck@linux.intel.com,
 akpm@linux-foundation.org, mhocko@kernel.org, vbabka@suse.cz,
 osalvador@suse.de, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: getarunks@gmail.com, Arun KS
Subject: [PATCH v10] mm/page_alloc.c: memory_hotplug: free pages as higher order
Date: Tue, 15 Jan 2019 22:21:08 +0530
Message-Id: <1547571068-18902-1-git-send-email-arunks@codeaurora.org>
X-Mailer: git-send-email 1.9.1

When pages are freed at a higher order, the time the buddy allocator
spends coalescing them is reduced. With a section size of 256MB, the
hot-add latency of a single section improves from 50-60 ms to less than
1 ms, roughly a 60x improvement. Modify the external providers of the
online callback to align with this change.

Signed-off-by: Arun KS
Acked-by: Michal Hocko
Reviewed-by: Oscar Salvador
Reviewed-by: Alexander Duyck
---
Changes since v9:
- Fix condition check in hv_balloon driver.

Changes since v8:
- Remove return type change for online_page_callback.
- Use consistent names for external online_page providers.
- Fix onlined_pages accounting.

Changes since v7:
- Rebased to 5.0-rc1.
- Fixed onlined_pages accounting.
- Added comment for return value of online_page_callback.
- Renamed xen_bring_pgs_online to xen_online_pages.

Changes since v6:
- Rebased to 4.20.
- Changelog updated.
- No improvement seen on arm64, hence dropped the prefetch() removal.

Changes since v5:
- Rebased to 4.20-rc1.
- Changelog updated.

Changes since v4:
- As suggested by Michal Hocko:
  - Simplify logic in online_pages_block() by using get_order().
  - Separate out removal of prefetch from __free_pages_core().

Changes since v3:
- Renamed _free_pages_boot_core -> __free_pages_core.
- Removed prefetch from __free_pages_core.
- Removed xen_online_page().

Changes since v2:
- Reuse code from __free_pages_boot_core().

Changes since v1:
- Removed prefetch().

Changes since RFC:
- Rebase.
- As suggested by Michal Hocko, remove pages_per_block.
- Modified external providers of online_page_callback.
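The chunking done by online_pages_blocks() below can be sketched in
plain userspace C. This is a hedged model, not kernel code: MAX_ORDER,
the 4 KiB PAGE_SIZE, and the local get_order() are stand-ins for the
kernel's build-time constants and the helper from asm-generic/getorder.h.

```c
#include <assert.h>

/* Stand-ins for kernel constants: MAX_ORDER is 11 on common configs,
 * and a 4 KiB page size is assumed here. */
#define MAX_ORDER 11
#define PAGE_SIZE 4096UL

/* Userspace model of the kernel's get_order(): the smallest order such
 * that 2^order pages cover `size` bytes. */
static int get_order(unsigned long size)
{
	unsigned long pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
	int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

/* Model of online_pages_blocks(): walk [start, end) in the largest
 * power-of-two chunks allowed, mirroring how the patch hands blocks of
 * up to order MAX_ORDER - 1 to the online callback instead of single
 * order-0 pages. */
static unsigned long online_pages_blocks(unsigned long start, unsigned long end)
{
	unsigned long onlined_pages = 0;
	int order;

	while (start < end) {
		order = get_order((end - start) * PAGE_SIZE);
		if (order > MAX_ORDER - 1)
			order = MAX_ORDER - 1;
		/* here the kernel invokes (*online_page_callback)() */
		onlined_pages += 1UL << order;
		start += 1UL << order;
	}
	return onlined_pages;
}
```

With a 256MB section (65536 pages of 4 KiB), the loop makes 64 callback
invocations at order 10 instead of 65536 at order 0, which is where the
50-60 ms to sub-millisecond improvement comes from. Note that
get_order() rounds up, so the sketch assumes the range is power-of-two
sized, as it is for section-aligned hot-add.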
v9: https://lore.kernel.org/patchwork/patch/1030806/
v8: https://lore.kernel.org/patchwork/patch/1030332/
v7: https://lore.kernel.org/patchwork/patch/1028908/
v6: https://lore.kernel.org/patchwork/patch/1007253/
v5: https://lore.kernel.org/patchwork/patch/995739/
v4: https://lore.kernel.org/patchwork/patch/995111/
v3: https://lore.kernel.org/patchwork/patch/992348/
v2: https://lore.kernel.org/patchwork/patch/991363/
v1: https://lore.kernel.org/patchwork/patch/989445/
RFC: https://lore.kernel.org/patchwork/patch/984754/
---
 drivers/hv/hv_balloon.c        |  4 ++--
 drivers/xen/balloon.c          | 15 ++++++++++-----
 include/linux/memory_hotplug.h |  2 +-
 mm/internal.h                  |  1 +
 mm/memory_hotplug.c            | 37 +++++++++++++++++++++++++------------
 mm/page_alloc.c                |  8 ++++----
 6 files changed, 45 insertions(+), 25 deletions(-)

diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index 5301fef..2ced9a7 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -771,7 +771,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
 	}
 }
 
-static void hv_online_page(struct page *pg)
+static void hv_online_page(struct page *pg, unsigned int order)
 {
 	struct hv_hotadd_state *has;
 	unsigned long flags;
@@ -780,10 +780,11 @@ static void hv_online_page(struct page *pg)
 	spin_lock_irqsave(&dm_device.ha_lock, flags);
 	list_for_each_entry(has, &dm_device.ha_region_list, list) {
 		/* The page belongs to a different HAS. */
-		if ((pfn < has->start_pfn) || (pfn >= has->end_pfn))
+		if ((pfn < has->start_pfn) ||
+		    (pfn + (1UL << order) >= has->end_pfn))
 			continue;
 
-		hv_page_online_one(has, pg);
+		hv_bring_pgs_online(has, pfn, (1UL << order));
 		break;
 	}
 	spin_unlock_irqrestore(&dm_device.ha_lock, flags);
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index ceb5048..d107447 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -369,14 +369,19 @@ static enum bp_state reserve_additional_memory(void)
 	return BP_ECANCELED;
 }
 
-static void xen_online_page(struct page *page)
+static void xen_online_page(struct page *page, unsigned int order)
 {
-	__online_page_set_limits(page);
+	unsigned long i, size = (1 << order);
+	unsigned long start_pfn = page_to_pfn(page);
+	struct page *p;
 
+	pr_debug("Online %lu pages starting at pfn 0x%lx\n", size, start_pfn);
 	mutex_lock(&balloon_mutex);
-
-	__balloon_append(page);
-
+	for (i = 0; i < size; i++) {
+		p = pfn_to_page(start_pfn + i);
+		__online_page_set_limits(p);
+		__balloon_append(p);
+	}
 	mutex_unlock(&balloon_mutex);
 }
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 07da5c6..e368730 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -87,7 +87,7 @@ extern int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn,
 			       unsigned long *valid_start, unsigned long *valid_end);
 extern void __offline_isolated_pages(unsigned long, unsigned long);
 
-typedef void (*online_page_callback_t)(struct page *page);
+typedef void (*online_page_callback_t)(struct page *page, unsigned int order);
 
 extern int set_online_page_callback(online_page_callback_t callback);
 extern int restore_online_page_callback(online_page_callback_t callback);
diff --git a/mm/internal.h b/mm/internal.h
index f4a7bb0..536bc2a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -163,6 +163,7 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void memblock_free_pages(struct page *page, unsigned long pfn,
 					unsigned int order);
+extern void __free_pages_core(struct page *page, unsigned int order);
 extern void prep_compound_page(struct page *page, unsigned int order);
 extern void post_alloc_hook(struct page *page, unsigned int order,
 					gfp_t gfp_flags);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b9a667d..77dff24 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -47,7 +47,7 @@
  * and restore_online_page_callback() for generic callback restore.
  */
 
-static void generic_online_page(struct page *page);
+static void generic_online_page(struct page *page, unsigned int order);
 
 static online_page_callback_t online_page_callback = generic_online_page;
 static DEFINE_MUTEX(online_page_callback_lock);
@@ -656,26 +656,39 @@ void __online_page_free(struct page *page)
 }
 EXPORT_SYMBOL_GPL(__online_page_free);
 
-static void generic_online_page(struct page *page)
+static void generic_online_page(struct page *page, unsigned int order)
 {
-	__online_page_set_limits(page);
-	__online_page_increment_counters(page);
-	__online_page_free(page);
+	__free_pages_core(page, order);
+	totalram_pages_add(1UL << order);
+#ifdef CONFIG_HIGHMEM
+	if (PageHighMem(page))
+		totalhigh_pages_add(1UL << order);
+#endif
+}
+
+static int online_pages_blocks(unsigned long start, unsigned long nr_pages)
+{
+	unsigned long end = start + nr_pages;
+	int order, ret, onlined_pages = 0;
+
+	while (start < end) {
+		order = min(MAX_ORDER - 1,
+			get_order(PFN_PHYS(end) - PFN_PHYS(start)));
+		(*online_page_callback)(pfn_to_page(start), order);
+
+		onlined_pages += (1UL << order);
+		start += (1UL << order);
+	}
+	return onlined_pages;
 }
 
 static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
 			void *arg)
 {
-	unsigned long i;
 	unsigned long onlined_pages = *(unsigned long *)arg;
-	struct page *page;
 
 	if (PageReserved(pfn_to_page(start_pfn)))
-		for (i = 0; i < nr_pages; i++) {
-			page = pfn_to_page(start_pfn + i);
-			(*online_page_callback)(page);
-			onlined_pages++;
-		}
+		onlined_pages += online_pages_blocks(start_pfn, nr_pages);
 
 	online_mem_sections(start_pfn, start_pfn + nr_pages);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d295c9b..883212a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1303,7 +1303,7 @@ static void __free_pages_ok(struct page *page, unsigned int order)
 	local_irq_restore(flags);
 }
 
-static void __init __free_pages_boot_core(struct page *page, unsigned int order)
+void __free_pages_core(struct page *page, unsigned int order)
 {
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
@@ -1382,7 +1382,7 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 {
 	if (early_page_uninitialised(pfn))
 		return;
-	return __free_pages_boot_core(page, order);
+	__free_pages_core(page, order);
 }
 
 /*
@@ -1472,14 +1472,14 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == pageblock_nr_pages &&
 	    (pfn & (pageblock_nr_pages - 1)) == 0) {
 		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_boot_core(page, pageblock_order);
+		__free_pages_core(page, pageblock_order);
 		return;
 	}
 
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if ((pfn & (pageblock_nr_pages - 1)) == 0)
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_boot_core(page, 0);
+		__free_pages_core(page, 0);
 	}
 }
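After this patch, an online_page_callback_t provider receives the first
page of a 2^order block and must handle every page in it, as the
reworked xen_online_page() above does. A minimal userspace sketch of
that contract follows; a pfn stands in for struct page *, and
demo_online_cb is a hypothetical provider, not a kernel symbol.

```c
#include <assert.h>

/* Mirrors the new online_page_callback_t shape: (first page, order). */
typedef void (*online_cb_t)(unsigned long start_pfn, unsigned int order);

static unsigned long pages_handled;

/* Hypothetical provider modeled on the reworked xen_online_page():
 * the callback itself walks all 2^order pages of the block. */
static void demo_online_cb(unsigned long start_pfn, unsigned int order)
{
	unsigned long i, size = 1UL << order;

	(void)start_pfn;
	for (i = 0; i < size; i++)
		pages_handled++;	/* per-page work, e.g. __balloon_append() */
}
```

The same responsibility shift is visible in hv_online_page(), which now
passes the whole range to hv_bring_pgs_online() instead of onlining one
page per callback.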