From patchwork Mon May  6 06:56:36 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Mon, 6 May 2019 08:56:36 +0200
Message-Id: <20190506065644.7415-38-jgross@suse.com>
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>
References: <20190506065644.7415-1-jgross@suse.com>
Cc: Juergen Gross, Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné
Subject: [Xen-devel] [PATCH RFC V2 37/45] x86: optimize loading of GDT at context switch

Instead of dynamically deciding whether the previous vcpu was using the
full or the default GDT, just add a per-cpu variable for that purpose.
This also removes the need to compare the vcpu_ids a second time.

Cache the need_full_gdt(nd) value in a local variable.
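[Editorial illustration, not part of the patch: the following minimal,
self-contained C sketch mimics the control flow of the diff below using a
plain array instead of Xen's per_cpu() machinery. All *_sketch names,
NR_CPUS value, and the boolean parameters are hypothetical stand-ins; it
only demonstrates how caching the "full GDT loaded" state per CPU lets a
repeat context switch skip both lgdt paths.]

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

/* Stand-in for Xen's static DEFINE_PER_CPU(bool, full_gdt_loaded). */
static bool full_gdt_loaded[NR_CPUS];

static void load_full_gdt_sketch(unsigned int cpu)
{
    /* The real code would lgdt the per-vcpu (full) GDT here. */
    full_gdt_loaded[cpu] = true;    /* record what is now loaded */
}

static void load_default_gdt_sketch(unsigned int cpu)
{
    /* The real code would lgdt the default GDT here. */
    full_gdt_loaded[cpu] = false;
}

static void context_switch_sketch(unsigned int cpu, bool need_full_gdt_n,
                                  bool vcpu_ids_differ)
{
    /*
     * Drop to the default GDT only if a full GDT is currently loaded
     * but cannot stay loaded: either the next vcpu occupies a different
     * vcpu slot, or it does not need the full GDT at all.
     */
    if ( full_gdt_loaded[cpu] && (vcpu_ids_differ || !need_full_gdt_n) )
        load_default_gdt_sketch(cpu);

    /* ... the page table switch etc. would happen here ... */

    if ( need_full_gdt_n && !full_gdt_loaded[cpu] )
        load_full_gdt_sketch(cpu);
}

int main(void)
{
    context_switch_sketch(0, true, true);   /* loads the full GDT */
    context_switch_sketch(0, true, false);  /* same vcpu_id: no lgdt at all */
    printf("cpu0 full GDT loaded: %d\n", full_gdt_loaded[0]);
    return 0;
}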
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
---
RFC V2: new patch (split from previous one)
---
 xen/arch/x86/domain.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 72a365ff6a..d04e704116 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -72,6 +72,8 @@
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
+static DEFINE_PER_CPU(bool, full_gdt_loaded);
+
 static void default_idle(void);
 void (*pm_idle) (void) __read_mostly = default_idle;
 void (*dead_idle) (void) __read_mostly = default_dead_idle;
@@ -1638,6 +1640,8 @@ static inline void load_full_gdt(struct vcpu *v, unsigned int cpu)
     gdt_desc.base = GDT_VIRT_START(v);
 
     lgdt(&gdt_desc);
+
+    per_cpu(full_gdt_loaded, cpu) = true;
 }
 
 static inline void load_default_gdt(seg_desc_t *gdt, unsigned int cpu)
@@ -1648,6 +1652,8 @@ static inline void load_default_gdt(seg_desc_t *gdt, unsigned int cpu)
     gdt_desc.base = (unsigned long)(gdt - FIRST_RESERVED_GDT_ENTRY);
 
     lgdt(&gdt_desc);
+
+    per_cpu(full_gdt_loaded, cpu) = false;
 }
 
 static void __context_switch(void)
@@ -1658,6 +1664,7 @@ static void __context_switch(void)
     struct vcpu *n = current;
     struct domain *pd = p->domain, *nd = n->domain;
     seg_desc_t *gdt;
+    bool need_full_gdt_n;
 
     ASSERT(p != n);
     ASSERT(!vcpu_cpu_dirty(n));
@@ -1700,11 +1707,13 @@ static void __context_switch(void)
     gdt = !is_pv_32bit_domain(nd) ? per_cpu(gdt_table, cpu) :
                                     per_cpu(compat_gdt_table, cpu);
 
-    if ( need_full_gdt(nd) )
+    need_full_gdt_n = need_full_gdt(nd);
+
+    if ( need_full_gdt_n )
         write_full_gdt_ptes(gdt, n);
 
-    if ( need_full_gdt(pd) &&
-         ((p->vcpu_id != n->vcpu_id) || !need_full_gdt(nd)) )
+    if ( per_cpu(full_gdt_loaded, cpu) &&
+         ((p->vcpu_id != n->vcpu_id) || !need_full_gdt_n) )
         load_default_gdt(gdt, cpu);
 
     write_ptbase(n);
@@ -1716,8 +1725,7 @@ static void __context_switch(void)
         svm_load_segs(0, 0, 0, 0, 0, 0, 0);
 #endif
 
-    if ( need_full_gdt(nd) &&
-         ((p->vcpu_id != n->vcpu_id) || !need_full_gdt(pd)) )
+    if ( need_full_gdt_n && !per_cpu(full_gdt_loaded, cpu) )
         load_full_gdt(n, cpu);
 
     if ( pd != nd )