From patchwork Tue Jul 18 10:34:27 2017
X-Patchwork-Submitter: Sergey Dyasli <sergey.dyasli@citrix.com>
X-Patchwork-Id: 9847785
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 18 Jul 2017 11:34:27 +0100
Message-ID: <20170718103429.25020-11-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170718103429.25020-1-sergey.dyasli@citrix.com>
References: <20170718103429.25020-1-sergey.dyasli@citrix.com>
Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap, Andrew Cooper,
    Tim Deegan, Jan Beulich, Boris Ostrovsky, Suravee Suthikulpanit
Subject: [Xen-devel] [PATCH RFC 10/12] x86/np2m: implement sharing of np2m between vCPUs

Modify p2m_get_nestedp2m() to allow sharing a np2m between multiple
vCPUs with the same np2m_base (L1 EPTP value in VMCS12).

np2m_schedule_in/out() callbacks are added to context_switch(), and a
pseudo schedule-out is performed during virtual_vmexit().

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/domain.c       |  2 ++
 xen/arch/x86/hvm/vmx/vvmx.c |  4 ++++
 xen/arch/x86/mm/p2m.c       | 29 +++++++++++++++++++++++++++--
 3 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index dd8bf1302f..38c86a5ded 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1642,6 +1642,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     {
         _update_runstate_area(prev);
         vpmu_switch_from(prev);
+        np2m_schedule_out();
     }
 
     if ( is_hvm_domain(prevd) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
@@ -1690,6 +1691,7 @@
 
         /* Must be done with interrupts enabled */
         vpmu_switch_to(next);
+        np2m_schedule_in();
     }
 
     /* Ensure that the vcpu has an up-to-date time base. */
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 7b193767cd..2203d541ea 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1187,6 +1187,7 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
 
     /* Setup virtual ETP for L2 guest*/
     if ( nestedhvm_paging_mode_hap(v) )
+        /* This will setup the initial np2m for the nested vCPU */
         __vmwrite(EPT_POINTER, get_shadow_eptp(v));
     else
         __vmwrite(EPT_POINTER, get_host_eptp(v));
@@ -1353,6 +1354,9 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
          !(v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
         shadow_to_vvmcs_bulk(v, ARRAY_SIZE(gpdpte_fields), gpdpte_fields);
 
+    /* This will clear current pCPU bit in p2m->dirty_cpumask */
+    np2m_schedule_out();
+
     vmx_vmcs_switch(v->arch.hvm_vmx.vmcs_pa, nvcpu->nv_n1vmcx_pa);
 
     nestedhvm_vcpu_exit_guestmode(v);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 364fdd8c13..480459ae51 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1830,6 +1830,7 @@ p2m_get_nestedp2m_locked(struct vcpu *v)
     struct domain *d = v->domain;
     struct p2m_domain *p2m;
     uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
+    unsigned int i;
 
     /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
     np2m_base &= ~(0xfffull);
@@ -1843,10 +1844,34 @@ p2m_get_nestedp2m_locked(struct vcpu *v)
     if ( p2m )
     {
         p2m_lock(p2m);
-        if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
+        if ( p2m->np2m_base == np2m_base )
         {
-            if ( p2m->np2m_base == P2M_BASE_EADDR )
+            /* Check if np2m was flushed just before the lock */
+            if ( nv->np2m_generation != p2m->np2m_generation )
                 nvcpu_flush(v);
+            /* np2m is up-to-date */
+            p2m->np2m_base = np2m_base;
+            assign_np2m(v, p2m);
+            nestedp2m_unlock(d);
+
+            return p2m;
+        }
+        else if ( p2m->np2m_base != P2M_BASE_EADDR )
+        {
+            /* vCPU is switching from some other valid np2m */
+            cpumask_clear_cpu(v->processor, p2m->dirty_cpumask);
+        }
+        p2m_unlock(p2m);
+    }
+
+    /* Share a np2m if possible */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+    {
+        p2m = d->arch.nested_p2m[i];
+        p2m_lock(p2m);
+        if ( p2m->np2m_base == np2m_base )
+        {
+            nvcpu_flush(v);
             p2m->np2m_base = np2m_base;
             assign_np2m(v, p2m);
             nestedp2m_unlock(d);
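
Note for reviewers: the lookup order that p2m_get_nestedp2m_locked() follows
after this change (reuse the vCPU's current np2m if its np2m_base still
matches, otherwise share a pool entry that already shadows the same
np2m_base, otherwise claim a free slot) can be illustrated with the
simplified, standalone C sketch below. This is not Xen code: the structures,
the pool array and helpers such as get_np2m() and assign_np2m() are
stand-ins invented for illustration, and all locking, TLB flushing and
eviction logic is deliberately left out.

/* Build: cc -std=c99 -o np2m_sketch np2m_sketch.c */
#include <stdint.h>
#include <stdio.h>

#define MAX_NESTEDP2M  10        /* size of the per-domain np2m pool */
#define NR_PCPUS       8         /* simplified stand-in for cpumask_t */
#define BASE_EADDR     (~0ull)   /* stands in for P2M_BASE_EADDR (unused slot) */

struct np2m {
    uint64_t np2m_base;          /* L1 EPTP this np2m shadows */
    uint64_t generation;         /* bumped on every flush */
    uint8_t  dirty_cpumask[NR_PCPUS]; /* pCPUs that may hold stale mappings */
};

struct vcpu {
    unsigned int processor;      /* current pCPU */
    uint64_t np2m_base;          /* L1 EPTP taken from VMCS12 */
    uint64_t np2m_generation;    /* generation seen at last assignment */
    struct np2m *nv_p2m;         /* currently assigned np2m, if any */
};

static struct np2m pool[MAX_NESTEDP2M];

static void assign_np2m(struct vcpu *v, struct np2m *p2m)
{
    v->nv_p2m = p2m;
    v->np2m_generation = p2m->generation;
    p2m->dirty_cpumask[v->processor] = 1;   /* np2m is in use on this pCPU */
}

/* Pick an np2m for @v, mirroring the order of checks in the patch. */
static struct np2m *get_np2m(struct vcpu *v)
{
    struct np2m *p2m = v->nv_p2m;
    unsigned int i;

    /* 1. Reuse the currently assigned np2m if its base still matches. */
    if ( p2m && p2m->np2m_base == v->np2m_base )
    {
        if ( v->np2m_generation != p2m->generation )
        {
            /* The real code calls nvcpu_flush(v) here. */
        }
        assign_np2m(v, p2m);
        return p2m;
    }
    /* Leaving some other valid np2m: drop this pCPU from its dirty mask. */
    if ( p2m && p2m->np2m_base != BASE_EADDR )
        p2m->dirty_cpumask[v->processor] = 0;

    /* 2. Share a pool entry that already shadows the same np2m_base. */
    for ( i = 0; i < MAX_NESTEDP2M; i++ )
        if ( pool[i].np2m_base == v->np2m_base )
        {
            assign_np2m(v, &pool[i]);
            return &pool[i];
        }

    /* 3. Claim a free slot (the real code recycles a victim instead). */
    for ( i = 0; i < MAX_NESTEDP2M; i++ )
        if ( pool[i].np2m_base == BASE_EADDR )
        {
            pool[i].np2m_base = v->np2m_base;
            assign_np2m(v, &pool[i]);
            return &pool[i];
        }
    return NULL;
}

int main(void)
{
    struct vcpu v0 = { .processor = 0, .np2m_base = 0x1000 };
    struct vcpu v1 = { .processor = 1, .np2m_base = 0x1000 };
    unsigned int i;

    for ( i = 0; i < MAX_NESTEDP2M; i++ )
        pool[i].np2m_base = BASE_EADDR;

    /* Both vCPUs use the same L1 EPTP, so they end up sharing one np2m. */
    printf("shared: %s\n", get_np2m(&v0) == get_np2m(&v1) ? "yes" : "no");
    return 0;
}

Running the sketch prints "shared: yes": because both vCPUs present the same
L1 EPTP, the second lookup finds the np2m already claimed by the first one
and attaches to it instead of consuming another pool slot, which is the
point of this patch.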