From patchwork Fri Mar 18 03:01:20 2016
X-Patchwork-Submitter: Shuai Ruan
X-Patchwork-Id: 8615961
From: Shuai Ruan
To: xen-devel@lists.xen.org
Cc: andrew.cooper3@citrix.com, keir@xen.org, jbeulich@suse.com
Date: Fri, 18 Mar 2016 11:01:20 +0800
Message-Id: <1458270080-19493-1-git-send-email-shuai.ruan@linux.intel.com>
X-Mailer: git-send-email 1.9.1
Subject: [Xen-devel] [V5] x86/xsaves: fix overwriting between non-lazy/lazy xsaves

The offsets at which components are saved by xsave[sc] are not fixed. So
when a save with v->fpu_dirtied set is followed by one with v->fpu_dirtied
clear, the non-lazy xsave[sc] may overwrite data written by the lazy one.
The solution is: when the xsave area is in compacted format and, taking
xcr0_accum into consideration, the guest has ever used
XSTATE_LAZY & ~XSTATE_FP_SSE, vcpu_xsave_mask will return XSTATE_ALL;
otherwise it will return XSTATE_NONLAZY. (XSTATE_FP_SSE is excluded
because xsave[sc] writes the XSTATE_FP_SSE part into the legacy region
of the xsave area, whose layout is fixed, so saving XSTATE_FP_SSE cannot
cause the overwriting problem.)

This may cause redundant saving of lazy states, with a corresponding
performance impact. After doing some performance tests on xsavec and
xsaveopt (suggested by Jan), the results show that xsaveopt performs
better than xsavec, so the hypervisor will not use xsavec anymore.
xsaves will not be used until supervisor states are introduced in the
hypervisor. XSTATE_XSAVES_ONLY (indicating that supervisor states are
understood in Xen) is introduced, and the use of xsaves depends on
whether XSTATE_XSAVES_ONLY is set in xcr0_accum.

Signed-off-by: Shuai Ruan
Reported-by: Jan Beulich
---
v5: Address comments from Jan:
1. Add XSTATE_XSAVES_ONLY and make the use of xsaves depend on whether
   these bits are set in xcr0_accum.
2. Change the compress logic in compress_xsave_states() to depend on
   !(v->arch.xcr0_accum & XSTATE_XSAVES_ONLY) && !xsave_area_compressed(src).
3. Set XSTATE_COMPACTION_ENABLED only in xrstor().
4. Rebase the code on "[V4] x86/xsaves: calculate the xstate_comp_offsets
   base on xstate_bv" (already sent out), as they both change the same
   code. (I am not sure whether this rebase is ok or not.)

v4: Address comments from Jan:
1. Add the synthetic CPU feature X86_FEATURE_XSAVE_COMPACT and use it to
   indicate whether the hypervisor uses a compact xsave area or not.
2. Fix typo/grammar errors in the comment in vcpu_xsave_mask.

v3: Address comments from Jan:
1. Add xsavec clean-up code and disable xsaves.
2. Add a comment on why a certain mask should be returned in
   vcpu_xsave_mask.

v2: Address comments from Jan:
Add the performance impact and next steps to the description.
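To illustrate why the offsets are not fixed in the compacted format: the
offset of an extended component depends on which lower-numbered components
are enabled in the bitmap the area was written with, so a save done with a
smaller mask places later components at lower offsets, on top of data a
preceding save with a larger mask put there. Below is a minimal sketch of
the offset computation; it is not part of the patch, the helper names are
made up, and the 64-byte alignment bit reported in CPUID leaf 0xD ECX[1]
is ignored for brevity.

#include <stdint.h>
#include <cpuid.h>

#define XSTATE_AREA_MIN_SIZE (512 + 64) /* legacy region + xsave header */

/* Size in bytes of extended component 'idx', from CPUID leaf 0xD. */
static uint32_t xstate_comp_size(unsigned int idx)
{
    uint32_t eax, ebx, ecx, edx;

    __cpuid_count(0xd, idx, eax, ebx, ecx, edx);
    return eax;
}

/*
 * Offset of extended component 'idx' in a compacted xsave area written
 * with the given component bitmap: the header is followed by the enabled
 * components in ascending order, so the offset is the sum of the sizes
 * of all enabled lower-numbered components.
 */
static uint32_t xstate_comp_offset(uint64_t xcomp_bv, unsigned int idx)
{
    uint32_t offset = XSTATE_AREA_MIN_SIZE;
    unsigned int i;

    for ( i = 2; i < idx; i++ )
        if ( xcomp_bv & (1ULL << i) )
            offset += xstate_comp_size(i);

    return offset;
}

For example, a save covering only the non-lazy states lays out
XSTATE_BNDREGS immediately after the header, exactly where a preceding
save that also covered XSTATE_YMM had written the YMM data.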
 xen/arch/x86/domain.c        |  8 ----
 xen/arch/x86/domctl.c        |  6 +--
 xen/arch/x86/hvm/hvm.c       |  7 ----
 xen/arch/x86/i387.c          | 23 ++++++++---
 xen/arch/x86/xstate.c        | 90 ++++++++++++++++++++++++++++----------------
 xen/include/asm-x86/xstate.h |  1 +
 6 files changed, 78 insertions(+), 57 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a6d721b..26dd1d3 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -948,14 +948,6 @@ int arch_set_info_guest(
         fpu_sse->fcw = FCW_DEFAULT;
         fpu_sse->mxcsr = MXCSR_DEFAULT;
     }
-    if ( cpu_has_xsaves )
-    {
-        ASSERT(v->arch.xsave_area);
-        v->arch.xsave_area->xsave_hdr.xcomp_bv = XSTATE_COMPACTION_ENABLED |
-            v->arch.xsave_area->xsave_hdr.xstate_bv;
-    }
-    else if ( v->arch.xsave_area )
-        v->arch.xsave_area->xsave_hdr.xcomp_bv = 0;
 
     if ( !compat )
     {
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index b34a295..1a36a36 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -922,7 +922,7 @@ long arch_do_domctl(
                 ret = -EFAULT;
 
             offset += sizeof(v->arch.xcr0_accum);
-            if ( !ret && (cpu_has_xsaves || cpu_has_xsavec) )
+            if ( !ret )
             {
                 void *xsave_area;
 
@@ -942,10 +942,6 @@ long arch_do_domctl(
                     ret = -EFAULT;
                 xfree(xsave_area);
             }
-            else if ( !ret && copy_to_guest_offset(evc->buffer, offset,
-                                                   (void *)v->arch.xsave_area,
-                                                   size - 2 * sizeof(uint64_t)) )
-                ret = -EFAULT;
 
             vcpu_unpause(v);
         }
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 255a1d6..35e2c52 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2202,9 +2202,6 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
         xsave_area->xsave_hdr.xstate_bv = 0;
         xsave_area->fpu_sse.mxcsr = MXCSR_DEFAULT;
     }
-    if ( cpu_has_xsaves && xsave_area )
-        xsave_area->xsave_hdr.xcomp_bv = XSTATE_COMPACTION_ENABLED |
-                                         xsave_area->xsave_hdr.xstate_bv;
 
     v->arch.user_regs.eax = ctxt.rax;
     v->arch.user_regs.ebx = ctxt.rbx;
@@ -5589,11 +5586,7 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
     fpu_ctxt->fcw = FCW_RESET;
     fpu_ctxt->mxcsr = MXCSR_DEFAULT;
     if ( v->arch.xsave_area )
-    {
         v->arch.xsave_area->xsave_hdr.xstate_bv = XSTATE_FP;
-        v->arch.xsave_area->xsave_hdr.xcomp_bv = cpu_has_xsaves
-            ? XSTATE_COMPACTION_ENABLED | XSTATE_FP : 0;
-    }
 
     v->arch.vgc_flags = VGCF_online;
     memset(&v->arch.user_regs, 0, sizeof(v->arch.user_regs));
diff --git a/xen/arch/x86/i387.c b/xen/arch/x86/i387.c
index c29d0fa..2afa762 100644
--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -118,7 +118,24 @@ static inline uint64_t vcpu_xsave_mask(const struct vcpu *v)
     if ( v->fpu_dirtied )
         return v->arch.nonlazy_xstate_used ? XSTATE_ALL : XSTATE_LAZY;
 
-    return v->arch.nonlazy_xstate_used ? XSTATE_NONLAZY : 0;
+    ASSERT(v->arch.nonlazy_xstate_used);
+
+    /*
+     * The offsets of components which live in the extended region of
+     * compact xsave area are not fixed. Xsave area may be overwritten
+     * when a xsave with v->fpu_dirtied set is followed by one with
+     * v->fpu_dirtied clear.
+     * In such case, if hypervisor uses compact xsave area and guest
+     * has ever used lazy states (checking xcr0_accum excluding
+     * XSTATE_FP_SSE), vcpu_xsave_mask will return XSTATE_ALL. Otherwise
+     * return XSTATE_NONLAZY.
+     * XSTATE_FP_SSE may be excluded, because the offsets of XSTATE_FP_SSE
+     * (in the legacy region of xsave area) are fixed, so saving
+     * XSTATE_FP_SSE will not cause overwriting problem.
+     */
+    return (v->arch.xsave_area->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED)
+           && (v->arch.xcr0_accum & XSTATE_LAZY & ~XSTATE_FP_SSE)
+           ? XSTATE_ALL : XSTATE_NONLAZY;
 }
 
 /* Save x87 extended state */
@@ -275,11 +292,7 @@ int vcpu_init_fpu(struct vcpu *v)
         return rc;
 
     if ( v->arch.xsave_area )
-    {
         v->arch.fpu_ctxt = &v->arch.xsave_area->fpu_sse;
-        if ( cpu_has_xsaves )
-            v->arch.xsave_area->xsave_hdr.xcomp_bv = XSTATE_COMPACTION_ENABLED;
-    }
     else
     {
         BUILD_BUG_ON(__alignof(v->arch.xsave_area->fpu_sse) < 16);
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index ef2c54d..61e5828 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -178,7 +178,7 @@ void expand_xsave_states(struct vcpu *v, void *dest, unsigned int size)
     u64 xstate_bv = xsave->xsave_hdr.xstate_bv;
     u64 valid;
 
-    if ( !cpu_has_xsaves && !cpu_has_xsavec )
+    if ( !(xsave->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED) )
     {
         memcpy(dest, xsave, size);
         return;
@@ -222,22 +222,21 @@ void compress_xsave_states(struct vcpu *v, const void *src, unsigned int size)
     u64 xstate_bv = ((const struct xsave_struct *)src)->xsave_hdr.xstate_bv;
     u64 valid;
 
-    if ( !cpu_has_xsaves && !cpu_has_xsavec )
+    if ( !(v->arch.xcr0_accum & XSTATE_XSAVES_ONLY) &&
+         !xsave_area_compressed(src) )
     {
         memcpy(xsave, src, size);
         return;
     }
 
-    ASSERT(!xsave_area_compressed(src));
     /*
      * Copy legacy XSAVE area, to avoid complications with CPUID
      * leaves 0 and 1 in the loop below.
      */
     memcpy(xsave, src, FXSAVE_SIZE);
 
-    /* Set XSTATE_BV and XCOMP_BV. */
+    /* Set XSTATE_BV. */
     xsave->xsave_hdr.xstate_bv = xstate_bv;
-    xsave->xsave_hdr.xcomp_bv = v->arch.xcr0_accum | XSTATE_COMPACTION_ENABLED;
     setup_xstate_comp(xstate_comp_offsets, xstate_bv);
 
     /*
@@ -267,31 +266,35 @@ void xsave(struct vcpu *v, uint64_t mask)
     uint32_t hmask = mask >> 32;
     uint32_t lmask = mask;
     unsigned int fip_width = v->domain->arch.x87_fip_width;
-#define XSAVE(pfx) \
-    alternative_io_3(".byte " pfx "0x0f,0xae,0x27\n", /* xsave */ \
-                     ".byte " pfx "0x0f,0xae,0x37\n", /* xsaveopt */ \
-                     X86_FEATURE_XSAVEOPT, \
-                     ".byte " pfx "0x0f,0xc7,0x27\n", /* xsavec */ \
-                     X86_FEATURE_XSAVEC, \
-                     ".byte " pfx "0x0f,0xc7,0x2f\n", /* xsaves */ \
-                     X86_FEATURE_XSAVES, \
-                     "=m" (*ptr), \
-                     "a" (lmask), "d" (hmask), "D" (ptr))
+#define XSAVE(pfx, xsave_ins) \
+    asm volatile ( ".byte " pfx xsave_ins \
+                   : "=m" (*ptr) \
+                   : "a" (lmask), "d" (hmask), "D" (ptr) )
 
     if ( fip_width == 8 || !(mask & XSTATE_FP) )
     {
-        XSAVE("0x48,");
+        if ( v->arch.xcr0_accum & XSTATE_XSAVES_ONLY )
+            XSAVE("0x48,", "0x0f,0xc7,0x2f"); /* xsaves */
+        else if ( cpu_has_xsaveopt )
+            XSAVE("0x48,", "0x0f,0xae,0x37"); /* xsaveopt */
+        else
+            XSAVE("0x48,", "0x0f,0xae,0x27"); /* xsave */
    }
     else if ( fip_width == 4 )
     {
-        XSAVE("");
+        if ( v->arch.xcr0_accum & XSTATE_XSAVES_ONLY )
+            XSAVE("", "0x0f,0xc7,0x2f");
+        else if ( cpu_has_xsaveopt )
+            XSAVE("", "0x0f,0xae,0x37");
+        else
+            XSAVE("", "0x0f,0xae,0x27");
     }
     else
     {
         typeof(ptr->fpu_sse.fip.sel) fcs = ptr->fpu_sse.fip.sel;
         typeof(ptr->fpu_sse.fdp.sel) fds = ptr->fpu_sse.fdp.sel;
 
-        if ( cpu_has_xsaveopt || cpu_has_xsaves )
+        if ( cpu_has_xsaveopt || (v->arch.xcr0_accum & XSTATE_XSAVES_ONLY) )
         {
             /*
              * XSAVEOPT/XSAVES may not write the FPU portion even when the
@@ -307,7 +310,12 @@ void xsave(struct vcpu *v, uint64_t mask)
             }
         }
 
-        XSAVE("0x48,");
+        if ( v->arch.xcr0_accum & XSTATE_XSAVES_ONLY )
+            XSAVE("0x48,", "0x0f,0xc7,0x2f");
+        else if ( cpu_has_xsaveopt )
+            XSAVE("0x48,", "0x0f,0xae,0x37");
+        else
+            XSAVE("0x48,", "0x0f,0xae,0x27");
 
         if ( !(mask & ptr->xsave_hdr.xstate_bv & XSTATE_FP) ||
             /*
@@ -317,7 +325,8 @@
             (!(ptr->fpu_sse.fsw & 0x0080) &&
              boot_cpu_data.x86_vendor == X86_VENDOR_AMD) )
         {
-            if ( cpu_has_xsaveopt || cpu_has_xsaves )
+            if ( cpu_has_xsaveopt ||
+                 (v->arch.xcr0_accum & XSTATE_XSAVES_ONLY) )
             {
                 ptr->fpu_sse.fip.sel = fcs;
                 ptr->fpu_sse.fdp.sel = fds;
             }
@@ -378,25 +387,42 @@ void xrstor(struct vcpu *v, uint64_t mask)
     switch ( __builtin_expect(ptr->fpu_sse.x[FPU_WORD_SIZE_OFFSET], 8) )
     {
     BUILD_BUG_ON(sizeof(faults) != 4); /* Clang doesn't support %z in asm. */
-#define XRSTOR(pfx) \
-        alternative_io("1: .byte " pfx "0x0f,0xae,0x2f\n" \
+#define XRSTOR(pfx, xrstor_ins) \
+        asm volatile ( "1: .byte " pfx xrstor_ins"\n" \
                        "3:\n" \
                        "   .section .fixup,\"ax\"\n" \
                        "2: incl %[faults]\n" \
                        "   jmp 3b\n" \
                        "   .previous\n" \
-                       _ASM_EXTABLE(1b, 2b), \
-                       ".byte " pfx "0x0f,0xc7,0x1f\n", \
-                       X86_FEATURE_XSAVES, \
-                       ASM_OUTPUT2([mem] "+m" (*ptr), [faults] "+g" (faults)), \
-                       [lmask] "a" (lmask), [hmask] "d" (hmask), \
-                       [ptr] "D" (ptr))
+                       _ASM_EXTABLE(1b, 2b) \
+                       : [mem] "+m" (*ptr), [faults] "+g" (faults) \
+                       : [lmask] "a" (lmask), [hmask] "d" (hmask), \
+                         [ptr] "D" (ptr) )
 
     default:
-        XRSTOR("0x48,");
+        if ( v->arch.xcr0_accum & XSTATE_XSAVES_ONLY )
+        {
+            if ( unlikely(!(ptr->xsave_hdr.xcomp_bv
+                            & XSTATE_COMPACTION_ENABLED)) )
+                ptr->xsave_hdr.xcomp_bv = ptr->xsave_hdr.xstate_bv
+                                          | XSTATE_COMPACTION_ENABLED;
+
+            XRSTOR("0x48,","0x0f,0xc7,0x1f"); /* xrstors */
+        }
+        else
+            XRSTOR("0x48,","0x0f,0xae,0x2f"); /* xrstor */
         break;
     case 4: case 2:
-        XRSTOR("");
+        if ( v->arch.xcr0_accum & XSTATE_XSAVES_ONLY )
+        {
+            if ( unlikely(!(ptr->xsave_hdr.xcomp_bv
+                            & XSTATE_COMPACTION_ENABLED)) )
+                ptr->xsave_hdr.xcomp_bv = ptr->xsave_hdr.xstate_bv
+                                          | XSTATE_COMPACTION_ENABLED;
+            XRSTOR("","0x0f,0xc7,0x1f");
+        }
+        else
+            XRSTOR("","0x0f,0xae,0x2f");
         break;
 #undef XRSTOR
     }
@@ -426,7 +452,7 @@ void xrstor(struct vcpu *v, uint64_t mask)
                       ((mask & XSTATE_YMM) &&
                        !(ptr->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED))) )
                 ptr->fpu_sse.mxcsr &= mxcsr_mask;
-            if ( cpu_has_xsaves || cpu_has_xsavec )
+            if ( v->arch.xcr0_accum & XSTATE_XSAVES_ONLY )
             {
                 ptr->xsave_hdr.xcomp_bv &= this_cpu(xcr0) | this_cpu(xss);
                 ptr->xsave_hdr.xstate_bv &= ptr->xsave_hdr.xcomp_bv;
@@ -443,7 +469,7 @@
         case 2: /* Stage 2: Reset all state. */
             ptr->fpu_sse.mxcsr = MXCSR_DEFAULT;
             ptr->xsave_hdr.xstate_bv = 0;
-            ptr->xsave_hdr.xcomp_bv = cpu_has_xsaves
+            ptr->xsave_hdr.xcomp_bv = v->arch.xcr0_accum & XSTATE_XSAVES_ONLY
                                       ? XSTATE_COMPACTION_ENABLED : 0;
             continue;
         }
diff --git a/xen/include/asm-x86/xstate.h b/xen/include/asm-x86/xstate.h
index a488688..91d1c39 100644
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -44,6 +44,7 @@
 #define XSTATE_NONLAZY (XSTATE_LWP | XSTATE_BNDREGS | XSTATE_BNDCSR | \
                         XSTATE_PKRU)
 #define XSTATE_LAZY    (XSTATE_ALL & ~XSTATE_NONLAZY)
+#define XSTATE_XSAVES_ONLY 0
 #define XSTATE_COMPACTION_ENABLED  (1ULL << 63)
 
 #define XSTATE_ALIGN64 (1U << 1)