From patchwork Tue Dec 22 07:46:59 2020
X-Patchwork-Submitter: Liang Li
X-Patchwork-Id: 11986075
From: Liang Li
Date: Tue, 22 Dec 2020 02:46:59 -0500
To: Alexander Duyck, Mel Gorman, Andrew Morton, Andrea Arcangeli,
    Dan Williams, "Michael S. Tsirkin", David Hildenbrand, Jason Wang,
    Dave Hansen, Michal Hocko, Mike Kravetz, Liang Li
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, qemu-devel@nongnu.org
Subject: [RFC PATCH 1/3] mm: support hugetlb free page reporting
Message-ID: <20201222074656.GA30035@open-light-1.localdomain>

Free page reporting only supports buddy pages; it cannot report the free
pages reserved for hugetlbfs. On the other hand, hugetlbfs is a good choice
for a system with a large amount of RAM, because it helps to reduce memory
management overhead and improve system performance.

This patch adds support for reporting hugepages in the free list of
hugetlb. It can be used by the virtio_balloon driver for memory overcommit
and for pre-zeroing free pages to speed up memory population.
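For reference, a consumer of the new interface only has to fill in a
struct page_reporting_dev_info and register it. The sketch below is not
part of this patch; my_report() and the module boilerplate are
hypothetical, and the mini_order/batch_size/delay_jiffies values simply
mirror the ones used by the virtio-balloon patch later in this series.

/*
 * Minimal sketch of a hugepage reporting consumer (illustration only,
 * error handling omitted).
 */
#include <linux/module.h>
#include <linux/page_reporting.h>
#include <linux/scatterlist.h>

/* called with up to HUGEPAGE_REPORTING_CAPACITY isolated hugepages */
static int my_report(struct page_reporting_dev_info *prdev,
		     struct scatterlist *sgl, unsigned int nents)
{
	/* hand the pages to a device, zero them, etc., then return 0 */
	return 0;
}

static struct page_reporting_dev_info my_dev_info = {
	.report = my_report,
};

static int __init my_init(void)
{
	my_dev_info.mini_order = 0;                    /* report every hugepage order */
	my_dev_info.batch_size = 2 * 1024 * 1024;      /* wake up after 2M freed */
	my_dev_info.delay_jiffies = HZ;                /* defer the work by 1s */

	return hugepage_reporting_register(&my_dev_info);
}

static void __exit my_exit(void)
{
	hugepage_reporting_unregister(&my_dev_info);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");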
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Andrea Arcangeli
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Mike Kravetz
Cc: Liang Li
Signed-off-by: Liang Li
---
 include/linux/hugetlb.h        |   3 +
 include/linux/page_reporting.h |   5 +
 mm/hugetlb.c                   |  29 ++++
 mm/page_reporting.c            | 287 +++++++++++++++++++++++++++++++++
 mm/page_reporting.h            |  34 ++++
 5 files changed, 358 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebca2ef02212..a72ad25501d3 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 
 struct ctl_table;
 struct user_struct;
@@ -114,6 +115,8 @@ int hugetlb_treat_movable_handler(struct ctl_table *, int, void *, size_t *,
 int hugetlb_mempolicy_sysctl_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
+bool isolate_free_huge_page(struct page *page, struct hstate *h, int nid);
+void putback_isolate_huge_page(struct hstate *h, struct page *page);
 int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
 			    struct vm_area_struct *);
 long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
 			 struct page **, struct vm_area_struct **,
diff --git a/include/linux/page_reporting.h b/include/linux/page_reporting.h
index 63e1e9fbcaa2..0da3d1a6f0cc 100644
--- a/include/linux/page_reporting.h
+++ b/include/linux/page_reporting.h
@@ -7,6 +7,7 @@
 
 /* This value should always be a power of 2, see page_reporting_cycle() */
 #define PAGE_REPORTING_CAPACITY	32
+#define HUGEPAGE_REPORTING_CAPACITY	1
 
 struct page_reporting_dev_info {
 	/* function that alters pages to make them "reported" */
@@ -26,4 +27,8 @@ struct page_reporting_dev_info {
 /* Tear-down and bring-up for page reporting devices */
 void page_reporting_unregister(struct page_reporting_dev_info *prdev);
 int page_reporting_register(struct page_reporting_dev_info *prdev);
+
+/* Tear-down and bring-up for hugepage reporting devices */
+void hugepage_reporting_unregister(struct page_reporting_dev_info *prdev);
+int hugepage_reporting_register(struct page_reporting_dev_info *prdev);
 #endif /*_LINUX_PAGE_REPORTING_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cbf32d2824fd..de6ce147dfe2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -41,6 +41,7 @@
 #include
 #include
 #include
+#include "page_reporting.h"
 #include "internal.h"
 
 int hugetlb_max_hstate __read_mostly;
@@ -1028,6 +1029,11 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 	list_move(&page->lru, &h->hugepage_freelists[nid]);
 	h->free_huge_pages++;
 	h->free_huge_pages_node[nid]++;
+	if (hugepage_reported(page)) {
+		__ClearPageReported(page);
+		pr_info("%s, free_huge_pages=%ld\n", __func__, h->free_huge_pages);
+	}
+	hugepage_reporting_notify_free(h->order);
 }
 
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
@@ -5531,6 +5537,29 @@ follow_huge_pgd(struct mm_struct *mm, unsigned long address, pgd_t *pgd, int fla
 	return pte_page(*(pte_t *)pgd) + ((address & ~PGDIR_MASK) >> PAGE_SHIFT);
 }
 
+bool isolate_free_huge_page(struct page *page, struct hstate *h, int nid)
+{
+	bool ret = true;
+
+	VM_BUG_ON_PAGE(!PageHead(page), page);
+
+	list_move(&page->lru, &h->hugepage_activelist);
+	set_page_refcounted(page);
+	h->free_huge_pages--;
+	h->free_huge_pages_node[nid]--;
+
+	return ret;
+}
+
+void putback_isolate_huge_page(struct hstate *h, struct page *page)
+{
+	int nid = page_to_nid(page);
+	pr_info("%s, free_huge_pages=%ld\n", __func__, h->free_huge_pages);
+	list_move(&page->lru, &h->hugepage_freelists[nid]);
+	h->free_huge_pages++;
+	h->free_huge_pages_node[nid]++;
+}
+
 bool isolate_huge_page(struct page *page, struct list_head *list)
 {
 	bool ret = true;
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index 20ec3fb1afc4..15d4b5372df8 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 
 #include "page_reporting.h"
 #include "internal.h"
@@ -16,6 +17,10 @@ static struct page_reporting_dev_info __rcu *pr_dev_info __read_mostly;
 int page_report_mini_order = pageblock_order;
 unsigned long page_report_batch_size = 32 * 1024 * 1024;
 
+static struct page_reporting_dev_info __rcu *hgpr_dev_info __read_mostly;
+int hugepage_report_mini_order = pageblock_order;
+unsigned long hugepage_report_batch_size = 64 * 1024 * 1024;
+
 enum {
 	PAGE_REPORTING_IDLE = 0,
 	PAGE_REPORTING_REQUESTED,
@@ -67,6 +72,24 @@ void __page_reporting_notify(void)
 	rcu_read_unlock();
 }
 
+/* notify prdev of free hugepage reporting request */
+void __hugepage_reporting_notify(void)
+{
+	struct page_reporting_dev_info *prdev;
+
+	/*
+	 * We use RCU to protect the pr_dev_info pointer. In almost all
+	 * cases this should be present, however in the unlikely case of
+	 * a shutdown this will be NULL and we should exit.
+	 */
+	rcu_read_lock();
+	prdev = rcu_dereference(hgpr_dev_info);
+	if (likely(prdev))
+		__page_reporting_request(prdev);
+
+	rcu_read_unlock();
+}
+
 static void
 page_reporting_drain(struct page_reporting_dev_info *prdev,
 		     struct scatterlist *sgl, unsigned int nents, bool reported)
@@ -103,6 +126,213 @@ page_reporting_drain(struct page_reporting_dev_info *prdev,
 	sg_init_table(sgl, nents);
 }
 
+static void
+hugepage_reporting_drain(struct page_reporting_dev_info *prdev,
+			 struct hstate *h, struct scatterlist *sgl,
+			 unsigned int nents, bool reported)
+{
+	struct scatterlist *sg = sgl;
+
+	/*
+	 * Drain the now reported pages back into their respective
+	 * free lists/areas. We assume at least one page is populated.
+	 */
+	do {
+		struct page *page = sg_page(sg);
+
+		putback_isolate_huge_page(h, page);
+
+		/* If the pages were not reported due to error skip flagging */
+		if (!reported)
+			continue;
+
+		__SetPageReported(page);
+	} while ((sg = sg_next(sg)));
+
+	/* reinitialize scatterlist now that it is empty */
+	sg_init_table(sgl, nents);
+}
+
+/*
+ * The page reporting cycle consists of 4 stages, fill, report, drain, and
+ * idle. We will cycle through the first 3 stages until we cannot obtain a
+ * full scatterlist of pages, in that case we will switch to idle.
+ */
+static int
+hugepage_reporting_cycle(struct page_reporting_dev_info *prdev,
+			 struct hstate *h, unsigned int nid,
+			 struct scatterlist *sgl, unsigned int *offset)
+{
+	struct list_head *list = &h->hugepage_freelists[nid];
+	unsigned int page_len = PAGE_SIZE << h->order;
+	struct page *page, *next;
+	long budget;
+	int ret = 0, scan_cnt = 0;
+
+	/*
+	 * Perform early check, if free area is empty there is
+	 * nothing to process so we can skip this free_list.
+	 */
+	if (list_empty(list))
+		return ret;
+
+	spin_lock_irq(&hugetlb_lock);
+
+	if (huge_page_order(h) > MAX_ORDER)
+		budget = HUGEPAGE_REPORTING_CAPACITY;
+	else
+		budget = HUGEPAGE_REPORTING_CAPACITY * 32;
+
+	/* loop through free list adding unreported pages to sg list */
+	list_for_each_entry_safe(page, next, list, lru) {
+		/* We are going to skip over the reported pages. */
+		if (PageReported(page)) {
+			if (++scan_cnt >= MAX_SCAN_NUM) {
+				ret = scan_cnt;
+				break;
+			}
+			continue;
+		}
+
+		/*
+		 * If we fully consumed our budget then update our
+		 * state to indicate that we are requesting additional
+		 * processing and exit this list.
+		 */
+		if (budget < 0) {
+			atomic_set(&prdev->state, PAGE_REPORTING_REQUESTED);
+			next = page;
+			break;
+		}
+
+		/* Attempt to pull page from list and place in scatterlist */
+		if (*offset) {
+			isolate_free_huge_page(page, h, nid);
+			/* Add page to scatter list */
+			--(*offset);
+			sg_set_page(&sgl[*offset], page, page_len, 0);
+
+			continue;
+		}
+
+		/*
+		 * Make the first non-processed page in the free list
+		 * the new head of the free list before we release the
+		 * zone lock.
+		 */
+		if (&page->lru != list && !list_is_first(&page->lru, list))
+			list_rotate_to_front(&page->lru, list);
+
+		/* release lock before waiting on report processing */
+		spin_unlock_irq(&hugetlb_lock);
+
+		/* begin processing pages in local list */
+		ret = prdev->report(prdev, sgl, HUGEPAGE_REPORTING_CAPACITY);
+
+		/* reset offset since the full list was reported */
+		*offset = HUGEPAGE_REPORTING_CAPACITY;
+
+		/* update budget to reflect call to report function */
+		budget--;
+
+		/* reacquire zone lock and resume processing */
+		spin_lock_irq(&hugetlb_lock);
+
+		/* flush reported pages from the sg list */
+		hugepage_reporting_drain(prdev, h, sgl,
+					 HUGEPAGE_REPORTING_CAPACITY, !ret);
+
+		/*
+		 * Reset next to first entry, the old next isn't valid
+		 * since we dropped the lock to report the pages
+		 */
+		next = list_first_entry(list, struct page, lru);
+
+		/* exit on error */
+		if (ret)
+			break;
+	}
+
+	/* Rotate any leftover pages to the head of the freelist */
+	if (&next->lru != list && !list_is_first(&next->lru, list))
+		list_rotate_to_front(&next->lru, list);
+
+	spin_unlock_irq(&hugetlb_lock);
+
+	return ret;
+}
+
+static int
+hugepage_reporting_process_hstate(struct page_reporting_dev_info *prdev,
+				  struct scatterlist *sgl, struct hstate *h)
+{
+	unsigned int leftover, offset = HUGEPAGE_REPORTING_CAPACITY;
+	int ret = 0, nid;
+
+	for (nid = 0; nid < MAX_NUMNODES; nid++) {
+		ret = hugepage_reporting_cycle(prdev, h, nid, sgl, &offset);
+
+		if (ret < 0)
+			return ret;
+	}
+
+	/* report the leftover pages before going idle */
+	leftover = HUGEPAGE_REPORTING_CAPACITY - offset;
+	if (leftover) {
+		sgl = &sgl[offset];
+		ret = prdev->report(prdev, sgl, leftover);
+
+		/* flush any remaining pages out from the last report */
+		spin_lock_irq(&hugetlb_lock);
+		hugepage_reporting_drain(prdev, h, sgl, leftover, !ret);
+		spin_unlock_irq(&hugetlb_lock);
+	}
+
+	return ret;
+}
+
+static void hugepage_reporting_process(struct work_struct *work)
+{
+	struct delayed_work *d_work = to_delayed_work(work);
+	struct page_reporting_dev_info *prdev = container_of(d_work,
+			struct page_reporting_dev_info, work);
+	int err = 0, state = PAGE_REPORTING_ACTIVE;
+	struct scatterlist *sgl;
+	struct hstate *h;
+
+	/*
+	 * Change the state to "Active" so that we can track if there is
+	 * anyone requests page reporting after we complete our pass. If
+	 * the state is not altered by the end of the pass we will switch
+	 * to idle and quit scheduling reporting runs.
+	 */
+	atomic_set(&prdev->state, state);
+
+	/* allocate scatterlist to store pages being reported on */
+	sgl = kmalloc_array(HUGEPAGE_REPORTING_CAPACITY, sizeof(*sgl), GFP_KERNEL);
+	if (!sgl)
+		goto err_out;
+
+	sg_init_table(sgl, HUGEPAGE_REPORTING_CAPACITY);
+
+	for_each_hstate(h) {
+		err = hugepage_reporting_process_hstate(prdev, sgl, h);
+		if (err)
+			break;
+	}
+
+	kfree(sgl);
+err_out:
+	/*
+	 * If the state has reverted back to requested then there may be
+	 * additional pages to be processed. We will defer for 2s to allow
+	 * more pages to accumulate.
+	 */
+	state = atomic_cmpxchg(&prdev->state, state, PAGE_REPORTING_IDLE);
+	if (state == PAGE_REPORTING_REQUESTED)
+		schedule_delayed_work(&prdev->work, prdev->delay_jiffies);
+}
+
 /*
  * The page reporting cycle consists of 4 stages, fill, report, drain, and
  * idle. We will cycle through the first 3 stages until we cannot obtain a
@@ -341,6 +571,9 @@ static void page_reporting_process(struct work_struct *work)
 static DEFINE_MUTEX(page_reporting_mutex);
 DEFINE_STATIC_KEY_FALSE(page_reporting_enabled);
 
+static DEFINE_MUTEX(hugepage_reporting_mutex);
+DEFINE_STATIC_KEY_FALSE(hugepage_reporting_enabled);
+
 int page_reporting_register(struct page_reporting_dev_info *prdev)
 {
 	int err = 0;
@@ -395,3 +628,57 @@ void page_reporting_unregister(struct page_reporting_dev_info *prdev)
 	mutex_unlock(&page_reporting_mutex);
 }
 EXPORT_SYMBOL_GPL(page_reporting_unregister);
+
+int hugepage_reporting_register(struct page_reporting_dev_info *prdev)
+{
+	int err = 0;
+
+	mutex_lock(&hugepage_reporting_mutex);
+
+	/* nothing to do if already in use */
+	if (rcu_access_pointer(hgpr_dev_info)) {
+		err = -EBUSY;
+		goto err_out;
+	}
+
+	/* initialize state and work structures */
+	atomic_set(&prdev->state, PAGE_REPORTING_IDLE);
+	INIT_DELAYED_WORK(&prdev->work, &hugepage_reporting_process);
+
+	/* Begin initial flush of zones */
+	__page_reporting_request(prdev);
+
+	/* Assign device to allow notifications */
+	rcu_assign_pointer(hgpr_dev_info, prdev);
+
+	hugepage_report_mini_order = prdev->mini_order;
+	hugepage_report_batch_size = prdev->batch_size;
+
+	/* enable hugepage reporting notification */
+	if (!static_key_enabled(&hugepage_reporting_enabled)) {
+		static_branch_enable(&hugepage_reporting_enabled);
+		pr_info("Free hugepage reporting enabled\n");
+	}
+err_out:
+	mutex_unlock(&hugepage_reporting_mutex);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(hugepage_reporting_register);
+
+void hugepage_reporting_unregister(struct page_reporting_dev_info *prdev)
+{
+	mutex_lock(&hugepage_reporting_mutex);
+
+	if (rcu_access_pointer(hgpr_dev_info) == prdev) {
+		/* Disable page reporting notification */
+		RCU_INIT_POINTER(hgpr_dev_info, NULL);
+		synchronize_rcu();
+
+		/* Flush any existing work, and lock it out */
+		cancel_delayed_work_sync(&prdev->work);
+	}
+
+	mutex_unlock(&hugepage_reporting_mutex);
+}
+EXPORT_SYMBOL_GPL(hugepage_reporting_unregister);
diff --git a/mm/page_reporting.h b/mm/page_reporting.h
index 86ac6ffad970..271c64c3c3cb 100644
--- a/mm/page_reporting.h
+++ b/mm/page_reporting.h
@@ -18,12 +18,24 @@ extern unsigned long page_report_batch_size;
 DECLARE_STATIC_KEY_FALSE(page_reporting_enabled);
 void __page_reporting_notify(void);
 
+extern int hugepage_report_mini_order;
+extern unsigned long hugepage_report_batch_size;
+
+DECLARE_STATIC_KEY_FALSE(hugepage_reporting_enabled);
+void __hugepage_reporting_notify(void);
+
 static inline bool page_reported(struct page *page)
 {
 	return static_branch_unlikely(&page_reporting_enabled) &&
 	       PageReported(page);
 }
 
+static inline bool hugepage_reported(struct page *page)
+{
+	return static_branch_unlikely(&hugepage_reporting_enabled) &&
+	       PageReported(page);
+}
+
 /**
  * page_reporting_notify_free - Free page notification to start page processing
  *
@@ -52,11 +64,33 @@ static inline void page_reporting_notify_free(unsigned int order)
 		__page_reporting_notify();
 	}
 }
+
+static inline void hugepage_reporting_notify_free(unsigned int order)
+{
+	static long batch_size = 0;
+
+	if (!static_branch_unlikely(&hugepage_reporting_enabled))
+		return;
+
+	/* Determine if we have crossed reporting threshold */
+	if (order < hugepage_report_mini_order)
+		return;
+
+	batch_size += (1 << order) << PAGE_SHIFT;
+	if (batch_size >= hugepage_report_batch_size) {
+		batch_size = 0;
+		__hugepage_reporting_notify();
+	}
+}
 #else /* CONFIG_PAGE_REPORTING */
 #define page_reported(_page)	false
 
 static inline void page_reporting_notify_free(unsigned int order)
 {
 }
+
+static inline void hugepage_reporting_notify_free(unsigned int order)
+{
+}
 #endif /* CONFIG_PAGE_REPORTING */
 #endif /*_MM_PAGE_REPORTING_H */

From patchwork Tue Dec 22 07:48:13 2020
X-Patchwork-Submitter: Liang Li
X-Patchwork-Id: 11986077
From: Liang Li
Date: Tue, 22 Dec 2020 02:48:13 -0500
To: Alexander Duyck, Mel Gorman, Andrew Morton, Andrea Arcangeli,
    Dan Williams, "Michael S. Tsirkin", David Hildenbrand, Jason Wang,
    Dave Hansen, Michal Hocko, Mike Kravetz, Liang Li
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, qemu-devel@nongnu.org
Subject: [RFC PATCH 2/3] virtio-balloon: add support for providing free huge
 page reports to host
Message-ID: <20201222074810.GA30047@open-light-1.localdomain>

Free page reporting only supports buddy pages; it cannot report the free
pages reserved for hugetlbfs. On the other hand, hugetlbfs is a good choice
for a system with a large amount of RAM, because it helps to reduce memory
management overhead and improve system performance.

This patch adds support for reporting free hugepages to the host when the
guest uses hugetlbfs. A new feature bit and a new virtqueue are added for
this feature.
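Condensed, the guest-side wiring looks roughly like the sketch below. This
is an illustration of the hunks that follow, not the literal patch; the
placement comments name the functions the real code lives in, and error
handling plus the pre-existing balloon virtqueues are omitted.

/* in init_vqs(): name the extra virtqueue and reuse balloon_ack for it */
if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_HPG_REPORTING)) {
	names[VIRTIO_BALLOON_VQ_HPG_REPORTING] = "hpg_reporting_vq";
	callbacks[VIRTIO_BALLOON_VQ_HPG_REPORTING] = balloon_ack;
}

/* in virtballoon_probe(): register the report callback with the
 * hugepage reporting core added in patch 1 */
vb->hpr_dev_info.report = virtballoon_free_hugepage_report;
vb->hpr_dev_info.mini_order = 0;
vb->hpr_dev_info.batch_size = 2 * 1024 * 1024;	/* 2M */
vb->hpr_dev_info.delay_jiffies = 1 * HZ;	/* 1 second */
err = hugepage_reporting_register(&vb->hpr_dev_info);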
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Andrea Arcangeli
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Mike Kravetz
Cc: Liang Li
Signed-off-by: Liang Li
---
 drivers/virtio/virtio_balloon.c     | 61 +++++++++++++++++++++++++++++
 include/uapi/linux/virtio_balloon.h |  1 +
 2 files changed, 62 insertions(+)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index a298517079bb..61363dfd3c2d 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -52,6 +52,7 @@ enum virtio_balloon_vq {
 	VIRTIO_BALLOON_VQ_STATS,
 	VIRTIO_BALLOON_VQ_FREE_PAGE,
 	VIRTIO_BALLOON_VQ_REPORTING,
+	VIRTIO_BALLOON_VQ_HPG_REPORTING,
 	VIRTIO_BALLOON_VQ_MAX
 };
 
@@ -126,6 +127,10 @@ struct virtio_balloon {
 	/* Free page reporting device */
 	struct virtqueue *reporting_vq;
 	struct page_reporting_dev_info pr_dev_info;
+
+	/* Free hugepage reporting device */
+	struct virtqueue *hpg_reporting_vq;
+	struct page_reporting_dev_info hpr_dev_info;
 };
 
 static const struct virtio_device_id id_table[] = {
@@ -192,6 +197,33 @@ static int virtballoon_free_page_report(struct page_reporting_dev_info *pr_dev_i
 	return 0;
 }
 
+static int virtballoon_free_hugepage_report(struct page_reporting_dev_info *hpr_dev_info,
+				   struct scatterlist *sg, unsigned int nents)
+{
+	struct virtio_balloon *vb =
+		container_of(hpr_dev_info, struct virtio_balloon, hpr_dev_info);
+	struct virtqueue *vq = vb->hpg_reporting_vq;
+	unsigned int unused, err;
+
+	/* We should always be able to add these buffers to an empty queue. */
+	err = virtqueue_add_inbuf(vq, sg, nents, vb, GFP_NOWAIT | __GFP_NOWARN);
+
+	/*
+	 * In the extremely unlikely case that something has occurred and we
+	 * are able to trigger an error we will simply display a warning
+	 * and exit without actually processing the pages.
+	 */
+	if (WARN_ON_ONCE(err))
+		return err;
+
+	virtqueue_kick(vq);
+
+	/* When host has read buffer, this completes via balloon_ack */
+	wait_event(vb->acked, virtqueue_get_buf(vq, &unused));
+
+	return 0;
+}
+
 static void set_page_pfns(struct virtio_balloon *vb,
 			  __virtio32 pfns[], struct page *page)
 {
@@ -515,6 +547,7 @@ static int init_vqs(struct virtio_balloon *vb)
 	callbacks[VIRTIO_BALLOON_VQ_FREE_PAGE] = NULL;
 	names[VIRTIO_BALLOON_VQ_FREE_PAGE] = NULL;
 	names[VIRTIO_BALLOON_VQ_REPORTING] = NULL;
+	names[VIRTIO_BALLOON_VQ_HPG_REPORTING] = NULL;
 
 	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
 		names[VIRTIO_BALLOON_VQ_STATS] = "stats";
@@ -531,6 +564,11 @@ static int init_vqs(struct virtio_balloon *vb)
 		callbacks[VIRTIO_BALLOON_VQ_REPORTING] = balloon_ack;
 	}
 
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_HPG_REPORTING)) {
+		names[VIRTIO_BALLOON_VQ_HPG_REPORTING] = "hpg_reporting_vq";
+		callbacks[VIRTIO_BALLOON_VQ_HPG_REPORTING] = balloon_ack;
+	}
+
 	err = vb->vdev->config->find_vqs(vb->vdev, VIRTIO_BALLOON_VQ_MAX,
 					 vqs, callbacks, names, NULL, NULL);
 	if (err)
@@ -566,6 +604,8 @@ static int init_vqs(struct virtio_balloon *vb)
 	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_REPORTING))
 		vb->reporting_vq = vqs[VIRTIO_BALLOON_VQ_REPORTING];
 
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_HPG_REPORTING))
+		vb->hpg_reporting_vq = vqs[VIRTIO_BALLOON_VQ_HPG_REPORTING];
 	return 0;
 }
 
@@ -1001,6 +1041,24 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		goto out_unregister_oom;
 	}
 
+	vb->hpr_dev_info.report = virtballoon_free_hugepage_report;
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_HPG_REPORTING)) {
+		unsigned int capacity;
+
+		capacity = virtqueue_get_vring_size(vb->hpg_reporting_vq);
+		if (capacity < PAGE_REPORTING_CAPACITY) {
+			err = -ENOSPC;
+			goto out_unregister_oom;
+		}
+
+		vb->hpr_dev_info.mini_order = 0;
+		vb->hpr_dev_info.batch_size = 2 * 1024 * 1024; /* 2M */
+		vb->hpr_dev_info.delay_jiffies = 1 * HZ; /* 1 seconds */
+		err = hugepage_reporting_register(&vb->hpr_dev_info);
+		if (err)
+			goto out_unregister_oom;
+	}
+
 	virtio_device_ready(vdev);
 
 	if (towards_target(vb))
@@ -1053,6 +1111,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
 
 	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_REPORTING))
 		page_reporting_unregister(&vb->pr_dev_info);
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_HPG_REPORTING))
+		hugepage_reporting_unregister(&vb->hpr_dev_info);
 	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
 		unregister_oom_notifier(&vb->oom_nb);
 	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
@@ -1133,6 +1193,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_FREE_PAGE_HINT,
 	VIRTIO_BALLOON_F_PAGE_POISON,
 	VIRTIO_BALLOON_F_REPORTING,
+	VIRTIO_BALLOON_F_HPG_REPORTING,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index ddaa45e723c4..8ca8f89d95c6 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -37,6 +37,7 @@
 #define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
 #define VIRTIO_BALLOON_F_PAGE_POISON	4 /* Guest is using page poisoning */
 #define VIRTIO_BALLOON_F_REPORTING	5 /* Page reporting virtqueue */
+#define VIRTIO_BALLOON_F_HPG_REPORTING	6 /* Huge page reporting virtqueue */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12

From patchwork Tue Dec 22 07:49:13 2020
X-Patchwork-Submitter: Liang Li
X-Patchwork-Id: 11986079
From: Liang Li
Date: Tue, 22 Dec 2020 02:49:13 -0500
To: Alexander Duyck, Mel Gorman, Andrew Morton, Andrea Arcangeli,
    Dan Williams, "Michael S. Tsirkin", David Hildenbrand, Jason Wang,
    Dave Hansen, Michal Hocko, Mike Kravetz, Liang Li
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, qemu-devel@nongnu.org
Subject: [RFC PATCH 3/3] mm: support free hugepage pre zero out
Message-ID: <20201222074910.GA30051@open-light-1.localdomain>
This patch adds support for pre-zeroing free hugepages. The feature can be
used to speed up page population and page fault handling.

Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Andrea Arcangeli
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Mike Kravetz
Cc: Liang Li
Signed-off-by: Liang Li
---
 mm/page_prezero.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/mm/page_prezero.c b/mm/page_prezero.c
index c8ce720bfc54..dff4e0adf402 100644
--- a/mm/page_prezero.c
+++ b/mm/page_prezero.c
@@ -26,6 +26,7 @@ static unsigned long delay_millisecs = 1000;
 static unsigned long zeropage_enable __read_mostly;
 static DEFINE_MUTEX(kzeropaged_mutex);
 static struct page_reporting_dev_info zero_page_dev_info;
+static struct page_reporting_dev_info zero_hugepage_dev_info;
 
 inline void clear_zero_page_flag(struct page *page, int order)
 {
@@ -69,9 +70,17 @@ static int start_kzeropaged(void)
 		zero_page_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
 
 		err = page_reporting_register(&zero_page_dev_info);
+
+		zero_hugepage_dev_info.report = zero_free_pages;
+		zero_hugepage_dev_info.mini_order = mini_page_order;
+		zero_hugepage_dev_info.batch_size = batch_size;
+		zero_hugepage_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
+
+		err |= hugepage_reporting_register(&zero_hugepage_dev_info);
 		pr_info("Zero page enabled\n");
 	} else {
 		page_reporting_unregister(&zero_page_dev_info);
+		hugepage_reporting_unregister(&zero_hugepage_dev_info);
 		pr_info("Zero page disabled\n");
 	}
 
@@ -90,7 +99,15 @@ static int restart_kzeropaged(void)
 		zero_page_dev_info.batch_size = batch_size;
 		zero_page_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
 
+		hugepage_reporting_unregister(&zero_hugepage_dev_info);
+
+		zero_hugepage_dev_info.report = zero_free_pages;
+		zero_hugepage_dev_info.mini_order = mini_page_order;
+		zero_hugepage_dev_info.batch_size = batch_size;
+		zero_hugepage_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
+
 		err = page_reporting_register(&zero_page_dev_info);
+		err |= hugepage_reporting_register(&zero_hugepage_dev_info);
 		pr_info("Zero page enabled\n");
 	}
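Taken together with patch 1, the effect of the hunks above is that the
kzeropaged worker drives both reporting paths with the same callback. A
condensed view (an illustration only, not the literal patch; the remaining
setup fields and error handling are omitted):

	zero_page_dev_info.report     = zero_free_pages;
	zero_hugepage_dev_info.report = zero_free_pages;

	/* zero pages coming off the buddy free lists */
	err  = page_reporting_register(&zero_page_dev_info);
	/* zero pages sitting in the hugetlb free lists */
	err |= hugepage_reporting_register(&zero_hugepage_dev_info);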