From patchwork Tue Mar 19 09:21:12 2019
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Tue, 19 Mar 2019 09:21:12 +0000
Message-ID: <20190319092116.1525-8-paul.durrant@citrix.com>
In-Reply-To: <20190319092116.1525-1-paul.durrant@citrix.com>
References: <20190319092116.1525-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v9 07/11] viridian: use viridian_map/unmap_guest_page() for reference tsc page
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Whilst the reference tsc page does not currently need to be kept mapped
after it is initially set up (or updated after migrate), the code can be
simplified by using the common guest page map/unmap and dump functions.
New functionality added by a subsequent patch will also require the page
to be kept mapped for the lifetime of the domain.

NOTE: Because the reference tsc page is per-domain rather than per-vcpu,
      this patch also changes viridian_map_guest_page() to take a domain
      pointer rather than a vcpu pointer. The domain pointer cannot be
      const, unlike the vcpu pointer.
Signed-off-by: Paul Durrant
Reviewed-by: Wei Liu
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: "Roger Pau Monné"
---
 xen/arch/x86/hvm/viridian/private.h  |  2 +-
 xen/arch/x86/hvm/viridian/synic.c    |  6 ++-
 xen/arch/x86/hvm/viridian/time.c     | 56 +++++++++-------------------
 xen/arch/x86/hvm/viridian/viridian.c |  3 +-
 xen/include/asm-x86/hvm/viridian.h   |  2 +-
 5 files changed, 25 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 5078b2d2ab..96a784b840 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -111,7 +111,7 @@ void viridian_time_load_domain_ctxt(
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp);
 
-void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp);
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp);
 void viridian_unmap_guest_page(struct viridian_page *vp);
 
 #endif /* X86_HVM_VIRIDIAN_PRIVATE_H */
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index b8dab4b246..fb560bc162 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -81,6 +81,7 @@ void viridian_apic_assist_clear(const struct vcpu *v)
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    struct domain *d = v->domain;
 
     switch ( idx )
     {
@@ -103,7 +104,7 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         vv->vp_assist.msr.raw = val;
         viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
         if ( vv->vp_assist.msr.enabled )
-            viridian_map_guest_page(v, &vv->vp_assist);
+            viridian_map_guest_page(d, &vv->vp_assist);
         break;
 
     default:
@@ -178,10 +179,11 @@ void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    struct domain *d = v->domain;
 
     vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
     if ( vv->vp_assist.msr.enabled )
-        viridian_map_guest_page(v, &vv->vp_assist);
+        viridian_map_guest_page(d, &vv->vp_assist);
 
     vv->apic_assist_pending = ctxt->apic_assist_pending;
 }
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 4399e62f54..16fe41d411 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -25,33 +25,10 @@ typedef struct _HV_REFERENCE_TSC_PAGE
     uint64_t Reserved2[509];
 } HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;
 
-static void dump_reference_tsc(const struct domain *d)
-{
-    const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
-
-    if ( !rt->enabled )
-        return;
-
-    printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: pfn: %lx\n",
-           d->domain_id, (unsigned long)rt->pfn);
-}
-
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.pfn;
-    struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
-    HV_REFERENCE_TSC_PAGE *p;
-
-    if ( !page || !get_page_type(page, PGT_writable_page) )
-    {
-        if ( page )
-            put_page(page);
-        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
-        return;
-    }
-
-    p = __map_domain_page(page);
+    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
+    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
 
     if ( initialize )
         clear_page(p);
@@ -82,7 +59,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
 
         printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: invalidated\n",
                d->domain_id);
-        goto out;
+        return;
     }
 
     /*
@@ -100,11 +77,6 @@ static void update_reference_tsc(struct domain *d, bool initialize)
     if ( p->TscSequence == 0xFFFFFFFF ||
          p->TscSequence == 0 ) /* Avoid both 'invalid' values */
         p->TscSequence = 1;
-
- out:
-    unmap_domain_page(p);
-
-    put_page_and_type(page);
 }
 
 static int64_t raw_trc_val(const struct domain *d)
@@ -149,10 +121,14 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        vd->reference_tsc.raw = val;
-        dump_reference_tsc(d);
-        if ( vd->reference_tsc.enabled )
+        viridian_unmap_guest_page(&vd->reference_tsc);
+        vd->reference_tsc.msr.raw = val;
+        viridian_dump_guest_page(v, "REFERENCE_TSC", &vd->reference_tsc);
+        if ( vd->reference_tsc.msr.enabled )
+        {
+            viridian_map_guest_page(d, &vd->reference_tsc);
             update_reference_tsc(d, true);
+        }
         break;
 
     default:
@@ -189,7 +165,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = vd->reference_tsc.raw;
+        *val = vd->reference_tsc.msr.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
@@ -231,6 +207,7 @@ void viridian_time_vcpu_deinit(const struct vcpu *v)
 
 void viridian_time_domain_deinit(const struct domain *d)
 {
+    viridian_unmap_guest_page(&d->arch.hvm.viridian->reference_tsc);
 }
 
 void viridian_time_save_vcpu_ctxt(
@@ -249,7 +226,7 @@ void viridian_time_save_domain_ctxt(
     const struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ctxt->time_ref_count = vd->time_ref_count.val;
-    ctxt->reference_tsc = vd->reference_tsc.raw;
+    ctxt->reference_tsc = vd->reference_tsc.msr.raw;
 }
 
 void viridian_time_load_domain_ctxt(
@@ -258,10 +235,13 @@ void viridian_time_load_domain_ctxt(
     struct viridian_domain *vd = d->arch.hvm.viridian;
 
     vd->time_ref_count.val = ctxt->time_ref_count;
-    vd->reference_tsc.raw = ctxt->reference_tsc;
+    vd->reference_tsc.msr.raw = ctxt->reference_tsc;
 
-    if ( vd->reference_tsc.enabled )
+    if ( vd->reference_tsc.msr.enabled )
+    {
+        viridian_map_guest_page(d, &vd->reference_tsc);
         update_reference_tsc(d, false);
+    }
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 742a988252..2b045ed88f 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -644,9 +644,8 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
            v, name, (unsigned long)vp->msr.pfn);
 }
 
-void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp)
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
 {
-    struct domain *d = v->domain;
     unsigned long gmfn = vp->msr.pfn;
     struct page_info *page;
 
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index abbbb36092..c65c044191 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -65,7 +65,7 @@ struct viridian_domain
     union viridian_guest_os_id_msr guest_os_id;
     union viridian_page_msr hypercall_gpa;
     struct viridian_time_ref_count time_ref_count;
-    union viridian_page_msr reference_tsc;
+    struct viridian_page reference_tsc;
 };
 
 void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
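
For illustration only (not part of the patch), below is a minimal standalone
C sketch of the map/unmap pattern the patch introduces for the now per-domain
reference tsc page. Only the helper names and the msr/ptr/enabled/pfn fields
are taken from the diff above; the type layouts, the reference_tsc_wrmsr()
wrapper and the use of calloc()/free() in place of Xen's real page mapping
are simplified stand-ins.

/* Simplified stand-ins for the Xen types; layouts here are illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

union viridian_page_msr {
    uint64_t raw;
    struct {
        uint64_t enabled:1;    /* bit 0: guest has enabled the page (assumed layout) */
        uint64_t reserved:11;
        uint64_t pfn:52;       /* guest frame holding the page (assumed layout) */
    };
};

struct viridian_page {
    union viridian_page_msr msr;
    void *ptr;                 /* mapping, kept until explicitly unmapped */
};

struct domain {
    struct viridian_page reference_tsc;   /* per-domain, as in this patch */
};

/* After this patch the map helper takes the domain, not a vcpu. */
void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
{
    if ( vp->ptr )
        return;
    /* In Xen this looks up the page named by vp->msr.pfn and maps it. */
    vp->ptr = calloc(1, 4096);
}

void viridian_unmap_guest_page(struct viridian_page *vp)
{
    if ( !vp->ptr )
        return;
    free(vp->ptr);             /* stand-in for dropping the real mapping */
    vp->ptr = NULL;
}

/* MSR-write path for the reference tsc page, mirroring viridian_time_wrmsr(). */
void reference_tsc_wrmsr(struct domain *d, uint64_t val)
{
    struct viridian_page *rt = &d->reference_tsc;

    viridian_unmap_guest_page(rt);        /* drop any previous mapping first */
    rt->msr.raw = val;
    if ( rt->msr.enabled )
        viridian_map_guest_page(d, rt);   /* stays mapped until domain deinit */
}

int main(void)
{
    struct domain d = { 0 };

    reference_tsc_wrmsr(&d, (123ull << 12) | 1);  /* pfn 123, enable bit set */
    printf("mapped: %d\n", d.reference_tsc.ptr != NULL);
    reference_tsc_wrmsr(&d, 0);                   /* disabled: unmapped again */
    printf("mapped: %d\n", d.reference_tsc.ptr != NULL);
    return 0;
}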