From patchwork Thu Sep 26 09:46:03 2019
X-Patchwork-Submitter: "Xia, Hongyan"
X-Patchwork-Id: 11162097
Date: Thu, 26 Sep 2019 10:46:03 +0100
Message-ID: <895e7b00d76f84e78e2fb86538b12c0cb6c70b52.1569489002.git.hongyax@amazon.com>
X-Mailer: git-send-email 2.17.1
Subject: [Xen-devel] [RFC PATCH 40/84] x86: switch root_pgt to mfn_t and use new APIs
Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu

This then requires moving the declaration of the root page table MFN into
mm.h and modifying setup_cpu_root_pgt() to have a single exit path.

We also need to force map_domain_page() to use the direct map when switching
per-domain mappings. This is contrary to our end goal of removing the direct
map, but the override will go away once map_domain_page() is made
context-switch safe in another (large) patch series.

Signed-off-by: Wei Liu
---
 xen/arch/x86/domain.c           | 15 ++++++++++---
 xen/arch/x86/domain_page.c      |  2 +-
 xen/arch/x86/mm.c               |  2 +-
 xen/arch/x86/pv/domain.c        |  2 +-
 xen/arch/x86/smpboot.c          | 40 ++++++++++++++++++++++-----------
 xen/include/asm-x86/mm.h        |  2 ++
 xen/include/asm-x86/processor.h |  2 +-
 7 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 4b0ad5e15d..a11b05ea5a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -69,6 +69,7 @@
 #include
 #include
 #include
+#include
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
@@ -1580,12 +1581,20 @@ void paravirt_ctxt_switch_from(struct vcpu *v)
 
 void paravirt_ctxt_switch_to(struct vcpu *v)
 {
-    root_pgentry_t *root_pgt = this_cpu(root_pgt);
+    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);
 
-    if ( root_pgt )
-        root_pgt[root_table_offset(PERDOMAIN_VIRT_START)] =
+    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
+    {
+        root_pgentry_t *rpt;
+
+        mapcache_override_current(INVALID_VCPU);
+        rpt = map_xen_pagetable_new(rpt_mfn);
+        rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
             l4e_from_page(v->domain->arch.perdomain_l3_pg,
                           __PAGE_HYPERVISOR_RW);
+        UNMAP_XEN_PAGETABLE_NEW(rpt);
+        mapcache_override_current(NULL);
+    }
 
     if ( unlikely(v->arch.dr7 & DR7_ACTIVE_MASK) )
         activate_debugregs(v);
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 24083e9a86..cfcffd35f3 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -57,7 +57,7 @@ static inline struct vcpu *mapcache_current_vcpu(void)
     return v;
 }
 
-void __init mapcache_override_current(struct vcpu *v)
+void mapcache_override_current(struct vcpu *v)
 {
     this_cpu(override) = v;
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 59dba05ba8..302423a11f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -530,7 +530,7 @@ void write_ptbase(struct vcpu *v)
     if ( is_pv_vcpu(v) && v->domain->arch.pv.xpti )
     {
         cpu_info->root_pgt_changed = true;
-        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt));
+        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn));
         if ( new_cr4 & X86_CR4_PCIDE )
             cpu_info->pv_cr3 |= get_pcid_bits(v, true);
         switch_cr3_cr4(v->arch.cr3, new_cr4);
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 4b6f48dea2..7e70690f03 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -360,7 +360,7 @@ static void _toggle_guest_pt(struct vcpu *v)
     if ( d->arch.pv.xpti )
     {
         cpu_info->root_pgt_changed = true;
-        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt)) |
+        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn)) |
                            (d->arch.pv.pcid ?
                             get_pcid_bits(v, true) : 0);
     }
 
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index b67432933d..f09563ab34 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -813,7 +813,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     return rc;
 }
 
-DEFINE_PER_CPU(root_pgentry_t *, root_pgt);
+DEFINE_PER_CPU(mfn_t, root_pgt_mfn);
 
 static root_pgentry_t common_pgt;
 
@@ -821,19 +821,27 @@ extern const char _stextentry[], _etextentry[];
 
 static int setup_cpu_root_pgt(unsigned int cpu)
 {
-    root_pgentry_t *rpt;
+    root_pgentry_t *rpt = NULL;
+    mfn_t rpt_mfn;
     unsigned int off;
     int rc;
 
     if ( !opt_xpti_hwdom && !opt_xpti_domu )
-        return 0;
+    {
+        rc = 0;
+        goto out;
+    }
 
-    rpt = alloc_xen_pagetable();
-    if ( !rpt )
-        return -ENOMEM;
+    rpt_mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
+    {
+        rc = -ENOMEM;
+        goto out;
+    }
 
+    rpt = map_xen_pagetable_new(rpt_mfn);
     clear_page(rpt);
-    per_cpu(root_pgt, cpu) = rpt;
+    per_cpu(root_pgt_mfn, cpu) = rpt_mfn;
 
     rpt[root_table_offset(RO_MPT_VIRT_START)] =
         idle_pg_table[root_table_offset(RO_MPT_VIRT_START)];
@@ -850,7 +858,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
             rc = clone_mapping(ptr, rpt);
 
         if ( rc )
-            return rc;
+            goto out;
 
         common_pgt = rpt[root_table_offset(XEN_VIRT_START)];
     }
@@ -875,19 +883,24 @@ static int setup_cpu_root_pgt(unsigned int cpu)
     if ( !rc )
         rc = clone_mapping((void *)per_cpu(stubs.addr, cpu), rpt);
 
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(rpt);
     return rc;
 }
 
 static void cleanup_cpu_root_pgt(unsigned int cpu)
 {
-    root_pgentry_t *rpt = per_cpu(root_pgt, cpu);
+    mfn_t rpt_mfn = per_cpu(root_pgt_mfn, cpu);
+    root_pgentry_t *rpt;
     unsigned int r;
     unsigned long stub_linear = per_cpu(stubs.addr, cpu);
 
-    if ( !rpt )
+    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
        return;
 
-    per_cpu(root_pgt, cpu) = NULL;
+    per_cpu(root_pgt_mfn, cpu) = INVALID_MFN;
+
+    rpt = map_xen_pagetable_new(rpt_mfn);
 
     for ( r = root_table_offset(DIRECTMAP_VIRT_START);
           r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
@@ -932,7 +945,8 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
         free_xen_pagetable_new(l3t_mfn);
     }
 
-    free_xen_pagetable(rpt);
+    UNMAP_XEN_PAGETABLE_NEW(rpt);
+    free_xen_pagetable_new(rpt_mfn);
 
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
@@ -1136,7 +1150,7 @@ void __init smp_prepare_cpus(void)
     rc = setup_cpu_root_pgt(0);
     if ( rc )
         panic("Error %d setting up PV root page table\n", rc);
-    if ( per_cpu(root_pgt, 0) )
+    if ( !mfn_eq(per_cpu(root_pgt_mfn, 0), INVALID_MFN) )
     {
         get_cpu_info()->pv_cr3 = 0;
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 80173eb4c3..12a10b270d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -646,4 +646,6 @@ void free_xen_pagetable_new(mfn_t mfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
+DECLARE_PER_CPU(mfn_t, root_pgt_mfn);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 3660238ca8..f571191cdb 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -465,7 +465,7 @@ static inline void disable_each_ist(idt_entry_t *idt)
 extern idt_entry_t idt_table[];
 extern idt_entry_t *idt_tables[];
 
-DECLARE_PER_CPU(root_pgentry_t *, root_pgt);
+
 DECLARE_PER_CPU(struct tss_struct, init_tss);
 
 extern void write_ptbase(struct vcpu *v);
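
For illustration only (not part of the patch): once only the MFN of the per-CPU
root page table is stored, every access follows a map/modify/unmap pattern,
with the mapcache forced onto the direct map while the per-domain mappings are
in flux. The sketch below shows that pattern. touch_root_pgt() and its
parameters are hypothetical example names; map_xen_pagetable_new(),
UNMAP_XEN_PAGETABLE_NEW(), mapcache_override_current() and INVALID_VCPU are the
APIs used in the hunks above.

/* Illustrative sketch; touch_root_pgt() is a hypothetical example name. */
static void touch_root_pgt(mfn_t rpt_mfn, unsigned int slot, root_pgentry_t e)
{
    root_pgentry_t *rpt;

    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
        return;

    /* Force map_domain_page() onto the direct map while mappings change. */
    mapcache_override_current(INVALID_VCPU);

    rpt = map_xen_pagetable_new(rpt_mfn);   /* map the root table by MFN */
    rpt[slot] = e;                          /* edit the entry */
    UNMAP_XEN_PAGETABLE_NEW(rpt);           /* tear the mapping down again */

    mapcache_override_current(NULL);
}

setup_cpu_root_pgt() follows the same discipline, but funnels every failure
through the single "out:" label so the temporary mapping is always dropped.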