From patchwork Thu Sep 8 05:30:03 2016
X-Patchwork-Submitter: Dongli Zhang
X-Patchwork-Id: 9320397
From: Dongli Zhang <dongli.zhang@oracle.com>
To: xen-devel@lists.xen.org
Date: Thu, 8 Sep 2016 13:30:03 +0800
Message-Id: <1473312603-28581-1-git-send-email-dongli.zhang@oracle.com>
X-Mailer: git-send-email 1.9.1
Cc: sstabellini@kernel.org, wei.liu2@citrix.com, george.dunlap@eu.citrix.com,
    ian.jackson@eu.citrix.com, dario.faggioli@citrix.com, tim@xen.org,
    david.vrabel@citrix.com, jbeulich@suse.com, andrew.cooper3@citrix.com
Subject: [Xen-devel] [PATCH v3 1/1] xen: move TLB-flush filtering out into populate_physmap during vm creation

This patch implements part of the TODO left in commit
a902c12ee45fc9389eb8fe54eeddaf267a555c58 by moving TLB-flush filtering out
into populate_physmap. Because of the per-page TLB-flush check in
alloc_heap_pages, creating a guest with more than 100GB of memory on a
host with 100+ CPUs is very slow.

This patch introduces a "MEMF_no_tlbflush" bit in memflags to indicate
whether the TLB flush should be done in alloc_heap_pages or in its caller
populate_physmap. When this bit is set in memflags, alloc_heap_pages skips
the TLB flush.

Using this bit after the VM has been created could lead to a security
issue: pages could become accessible to guest B while guest A still has a
cached mapping to them. Therefore, this patch also introduces an
"already_scheduled" field in struct domain to indicate whether the domain
has ever been scheduled by the hypervisor. MEMF_no_tlbflush can be set
only during the VM creation phase, while already_scheduled is still 0,
i.e. before the domain is scheduled for the first time.

TODO: ballooning a very large amount of memory cannot benefit from this
patch and might still be slow.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
Changed since v2:
  * Limit this optimization to domain creation time.
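Note for reviewers: condensed, the new allocation path in populate_physmap
looks as follows. This is a simplified sketch of the hunks below, with the
surrounding loop and error handling elided; all identifiers are the
existing Xen internals used by the patch:

    /* While the domain has never been scheduled, skip the per-page flush
     * in alloc_heap_pages() and instead track the newest stale timestamp
     * across all pages allocated in this hypercall ... */
    if ( d->already_scheduled == 0 )
        a->memflags |= MEMF_no_tlbflush;

    /* ... per allocated page: remember the most recent stale timestamp */
    if ( page[j].u.free.need_tlbflush &&
         page[j].tlbflush_timestamp <= tlbflush_current_time() &&
         (!need_tlbflush ||
          page[j].tlbflush_timestamp > tlbflush_timestamp) )
    {
        need_tlbflush = 1;
        tlbflush_timestamp = page[j].tlbflush_timestamp;
    }

    /* ... and flush once at the end, only on CPUs that still need it */
    if ( need_tlbflush )
    {
        cpumask_t mask = cpu_online_map;

        tlbflush_filter(mask, tlbflush_timestamp);
        if ( !cpumask_empty(&mask) )
            flush_tlb_mask(&mask);
    }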
 xen/common/domain.c     |  2 ++
 xen/common/memory.c     | 33 +++++++++++++++++++++++++++++++++
 xen/common/page_alloc.c |  3 ++-
 xen/common/schedule.c   |  5 +++++
 xen/include/xen/mm.h    |  2 ++
 xen/include/xen/sched.h |  3 +++
 6 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index a8804e4..611a471 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -303,6 +303,8 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
     if ( !zalloc_cpumask_var(&d->domain_dirty_cpumask) )
         goto fail;
 
+    d->already_scheduled = 0;
+
     if ( domcr_flags & DOMCRF_hvm )
         d->guest_type = guest_type_hvm;
     else if ( domcr_flags & DOMCRF_pvh )
diff --git a/xen/common/memory.c b/xen/common/memory.c
index f34dd56..3641469 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -141,6 +141,8 @@ static void populate_physmap(struct memop_args *a)
     unsigned int i, j;
     xen_pfn_t gpfn, mfn;
     struct domain *d = a->domain, *curr_d = current->domain;
+    bool_t need_tlbflush = 0;
+    uint32_t tlbflush_timestamp = 0;
 
     if ( !guest_handle_subrange_okay(a->extent_list, a->nr_done,
                                      a->nr_extents-1) )
@@ -150,6 +152,12 @@ static void populate_physmap(struct memop_args *a)
                             max_order(curr_d)) )
         return;
 
+    /* MEMF_no_tlbflush can be set only during vm creation phase when
+     * already_scheduled is still 0 before this domain gets scheduled for
+     * the first time. */
+    if ( d->already_scheduled == 0 )
+        a->memflags |= MEMF_no_tlbflush;
+
     for ( i = a->nr_done; i < a->nr_extents; i++ )
     {
         if ( i != a->nr_done && hypercall_preempt_check() )
@@ -214,6 +222,21 @@ static void populate_physmap(struct memop_args *a)
                 goto out;
             }
 
+            if ( d->already_scheduled == 0 )
+            {
+                for ( j = 0; j < (1U << a->extent_order); j++ )
+                {
+                    if ( page[j].u.free.need_tlbflush &&
+                         (page[j].tlbflush_timestamp <= tlbflush_current_time()) &&
+                         (!need_tlbflush ||
+                          (page[j].tlbflush_timestamp > tlbflush_timestamp)) )
+                    {
+                        need_tlbflush = 1;
+                        tlbflush_timestamp = page[j].tlbflush_timestamp;
+                    }
+                }
+            }
+
             mfn = page_to_mfn(page);
         }
 
@@ -232,6 +255,16 @@ static void populate_physmap(struct memop_args *a)
     }
 
  out:
+    if ( need_tlbflush )
+    {
+        cpumask_t mask = cpu_online_map;
+        tlbflush_filter(mask, tlbflush_timestamp);
+        if ( !cpumask_empty(&mask) )
+        {
+            perfc_incr(need_flush_tlb_flush);
+            flush_tlb_mask(&mask);
+        }
+    }
     a->nr_done = i;
 }
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 18ff6cf..e0283fc 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -827,7 +827,8 @@ static struct page_info *alloc_heap_pages(
         BUG_ON(pg[i].count_info != PGC_state_free);
         pg[i].count_info = PGC_state_inuse;
 
-        if ( pg[i].u.free.need_tlbflush &&
+        if ( !(memflags & MEMF_no_tlbflush) &&
+             pg[i].u.free.need_tlbflush &&
              (pg[i].tlbflush_timestamp <= tlbflush_current_time()) &&
              (!need_tlbflush ||
               (pg[i].tlbflush_timestamp > tlbflush_timestamp)) )
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 32a300f..593541a 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1376,6 +1376,11 @@ static void schedule(void)
 
     next = next_slice.task;
 
+    /* Set already_scheduled to 1 when this domain gets scheduled for the
+     * first time */
+    if ( next->domain->already_scheduled == 0 )
+        next->domain->already_scheduled = 1;
+
     sd->curr = next;
 
     if ( next_slice.time >= 0 ) /* -ve means no limit */
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 58bc0b8..880ca88 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -221,6 +221,8 @@ struct npfec {
 #define  MEMF_exact_node  (1U<<_MEMF_exact_node)
 #define _MEMF_no_owner    5
 #define  MEMF_no_owner    (1U<<_MEMF_no_owner)
+#define _MEMF_no_tlbflush 6
+#define  MEMF_no_tlbflush (1U<<_MEMF_no_tlbflush)
 #define _MEMF_node        8
 #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
 #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2f9c15f..cbd8329 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -474,6 +474,9 @@ struct domain
         unsigned int guest_request_enabled       : 1;
         unsigned int guest_request_sync          : 1;
     } monitor;
+
+    /* set to 1 the first time this domain gets scheduled. */
+    bool_t already_scheduled;
 };
 
 /* Protect updates/reads (resp.) of domain_list and domain_hash. */
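
For illustration only, not part of the patch: any future in-hypervisor
user of MEMF_no_tlbflush would have to follow the same contract as
populate_physmap above. A minimal hypothetical sketch, assuming the
existing alloc_domheap_pages() allocator entry point; the function name
alloc_for_new_domain is made up for this example:

    /* Hypothetical caller, for illustration only.  MEMF_no_tlbflush is
     * safe only while the domain has never been scheduled; the caller
     * then owns the deferred TLB flush (as populate_physmap does above)
     * before the domain runs for the first time. */
    static struct page_info *alloc_for_new_domain(struct domain *d,
                                                  unsigned int order)
    {
        if ( d->already_scheduled )
            return alloc_domheap_pages(d, order, 0); /* flushing path */

        return alloc_domheap_pages(d, order, MEMF_no_tlbflush);
    }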