From patchwork Mon May 6 06:56:35 2019
X-Patchwork-Id: 10930525
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné
Date: Mon, 6 May 2019 08:56:35 +0200
Message-Id: <20190506065644.7415-37-jgross@suse.com>
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>
References: <20190506065644.7415-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC V2 36/45] x86: make loading of GDT at context switch more modular

In preparation for core scheduling, carve out the GDT-related functionality
(writing GDT-related PTEs, loading the default or full GDT) into
sub-functions.
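As a reading aid, the GDT handling in __context_switch() ends up with roughly
the following shape after this patch (condensed from the hunks below; the
comments are annotations, not part of the patch):

    gdt = !is_pv_32bit_domain(nd) ? per_cpu(gdt_table, cpu) :
                                    per_cpu(compat_gdt_table, cpu);

    if ( need_full_gdt(nd) )
        write_full_gdt_ptes(gdt, n);   /* map per-CPU GDT pages into n's GDT area */

    if ( need_full_gdt(pd) &&
         ((p->vcpu_id != n->vcpu_id) || !need_full_gdt(nd)) )
        load_default_gdt(gdt, cpu);    /* lgdt the bare per-CPU GDT */

    write_ptbase(n);

    if ( need_full_gdt(nd) &&
         ((p->vcpu_id != n->vcpu_id) || !need_full_gdt(pd)) )
        load_full_gdt(n, cpu);         /* lgdt the full GDT at GDT_VIRT_START(n) */

Note the ordering: the default GDT is switched to before write_ptbase(), and
the full GDT only afterwards, once n's page tables (and with them the mapping
at GDT_VIRT_START(n)) are in place.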
Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
---
RFC V2: split off non-refactoring part
---
 xen/arch/x86/domain.c | 57 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 35 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 1525ccd8e5..72a365ff6a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1619,6 +1619,37 @@ static inline bool need_full_gdt(const struct domain *d)
     return is_pv_domain(d) && !is_idle_domain(d);
 }
 
+static inline void write_full_gdt_ptes(seg_desc_t *gdt, struct vcpu *v)
+{
+    unsigned long mfn = virt_to_mfn(gdt);
+    l1_pgentry_t *pl1e = pv_gdt_ptes(v);
+    unsigned int i;
+
+    for ( i = 0; i < NR_RESERVED_GDT_PAGES; i++ )
+        l1e_write(pl1e + FIRST_RESERVED_GDT_PAGE + i,
+                  l1e_from_pfn(mfn + i, __PAGE_HYPERVISOR_RW));
+}
+
+static inline void load_full_gdt(struct vcpu *v, unsigned int cpu)
+{
+    struct desc_ptr gdt_desc;
+
+    gdt_desc.limit = LAST_RESERVED_GDT_BYTE;
+    gdt_desc.base = GDT_VIRT_START(v);
+
+    lgdt(&gdt_desc);
+}
+
+static inline void load_default_gdt(seg_desc_t *gdt, unsigned int cpu)
+{
+    struct desc_ptr gdt_desc;
+
+    gdt_desc.limit = LAST_RESERVED_GDT_BYTE;
+    gdt_desc.base = (unsigned long)(gdt - FIRST_RESERVED_GDT_ENTRY);
+
+    lgdt(&gdt_desc);
+}
+
 static void __context_switch(void)
 {
     struct cpu_user_regs *stack_regs = guest_cpu_user_regs();
@@ -1627,7 +1658,6 @@ static void __context_switch(void)
     struct vcpu          *n = current;
     struct domain        *pd = p->domain, *nd = n->domain;
     seg_desc_t           *gdt;
-    struct desc_ptr       gdt_desc;
 
     ASSERT(p != n);
     ASSERT(!vcpu_cpu_dirty(n));
@@ -1669,25 +1699,13 @@ static void __context_switch(void)
 
     gdt = !is_pv_32bit_domain(nd) ? per_cpu(gdt_table, cpu) :
                                     per_cpu(compat_gdt_table, cpu);
-    if ( need_full_gdt(nd) )
-    {
-        unsigned long mfn = virt_to_mfn(gdt);
-        l1_pgentry_t *pl1e = pv_gdt_ptes(n);
-        unsigned int i;
-
-        for ( i = 0; i < NR_RESERVED_GDT_PAGES; i++ )
-            l1e_write(pl1e + FIRST_RESERVED_GDT_PAGE + i,
-                      l1e_from_pfn(mfn + i, __PAGE_HYPERVISOR_RW));
-    }
+
+    if ( need_full_gdt(nd) )
+        write_full_gdt_ptes(gdt, n);
 
     if ( need_full_gdt(pd) &&
          ((p->vcpu_id != n->vcpu_id) || !need_full_gdt(nd)) )
-    {
-        gdt_desc.limit = LAST_RESERVED_GDT_BYTE;
-        gdt_desc.base  = (unsigned long)(gdt - FIRST_RESERVED_GDT_ENTRY);
-
-        lgdt(&gdt_desc);
-    }
+        load_default_gdt(gdt, cpu);
 
     write_ptbase(n);
 
@@ -1700,12 +1718,7 @@ static void __context_switch(void)
 
     if ( need_full_gdt(nd) &&
          ((p->vcpu_id != n->vcpu_id) || !need_full_gdt(pd)) )
-    {
-        gdt_desc.limit = LAST_RESERVED_GDT_BYTE;
-        gdt_desc.base  = GDT_VIRT_START(n);
-
-        lgdt(&gdt_desc);
-    }
+        load_full_gdt(n, cpu);
 
     if ( pd != nd )
         cpumask_clear_cpu(cpu, pd->dirty_cpumask);
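A note on load_default_gdt(): the per-CPU table only holds the reserved
descriptors, so the descriptor base is biased down by FIRST_RESERVED_GDT_ENTRY,
which makes selectors in the reserved range index straight into that table.
A standalone sketch of the arithmetic (plain C, made-up constants, not Xen
code):

    /* Illustration of the pointer bias used by load_default_gdt().
     * Like the hypervisor code, it relies on out-of-range pointer
     * arithmetic behaving linearly. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t seg_desc_t;               /* one GDT entry is 8 bytes */

    #define FIRST_RESERVED_GDT_ENTRY  8        /* hypothetical demo value */
    #define NR_RESERVED_GDT_ENTRIES   4        /* hypothetical demo value */

    /* Per-CPU table: it stores the reserved entries only. */
    static seg_desc_t gdt[NR_RESERVED_GDT_ENTRIES];

    int main(void)
    {
        /* Same arithmetic as load_default_gdt(): bias the base downwards so
         * the table appears to start at GDT index 0 even though it only
         * stores the reserved range. */
        seg_desc_t *base = gdt - FIRST_RESERVED_GDT_ENTRY;
        unsigned int i;

        for ( i = 0; i < NR_RESERVED_GDT_ENTRIES; i++ )
        {
            unsigned int idx = FIRST_RESERVED_GDT_ENTRY + i;  /* selector index */

            /* A reserved selector resolves to the matching per-CPU slot. */
            assert(&base[idx] == &gdt[i]);
        }

        printf("reserved GDT selectors hit the per-CPU table as intended\n");
        return 0;
    }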