From patchwork Mon Feb 22 05:35:19 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shuai Ruan <shuai.ruan@linux.intel.com>
X-Patchwork-Id: 8371911
From: Shuai Ruan <shuai.ruan@linux.intel.com>
To: xen-devel@lists.xen.org
Cc: wei.liu2@citrix.com, Ian.Campbell@citrix.com,
    stefano.stabellini@eu.citrix.com, andrew.cooper3@citrix.com,
    ian.jackson@eu.citrix.com, jbeulich@suse.com, keir@xen.org
Date: Mon, 22 Feb 2016 13:35:19 +0800
Message-Id: <1456119321-10384-2-git-send-email-shuai.ruan@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1456119321-10384-1-git-send-email-shuai.ruan@linux.intel.com>
References: <1456119321-10384-1-git-send-email-shuai.ruan@linux.intel.com>
Subject: [Xen-devel] [PATCH 1/3] x86/xsaves: calculate the xstate_comp_offsets
 based on xcomp_bv
List-Id: Xen developer discussion <xen-devel.lists.xen.org>

The previous patch calculated xstate_comp_offsets from all available
features (xfeature_mask). This is wrong: in a compacted save area, the
offset of each component depends on which features are actually enabled
for the guest. Fix the bug by recalculating xstate_comp_offsets from the
xcomp_bv of the current guest whenever a save area is expanded or
compressed.

Also, xstate_comp_offsets must take the per-component 64-byte alignment,
reported by CPUID leaf 0xD sub-leaf ECX bit 1, into consideration.

Signed-off-by: Shuai Ruan <shuai.ruan@linux.intel.com>
Reported-by: Jan Beulich <jbeulich@suse.com>
---
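For illustration, a minimal standalone sketch of the offset computation
this patch implements. It is simplified, not the Xen code: comp_offsets(),
NFEAT and the sizes[]/align64[] example values are made up here, standing
in for what CPUID leaf 0xD sub-leaves report in EAX and in ECX bit 1;
XSTATE_AREA_MIN_SIZE mirrors Xen's constant (512-byte FXSAVE area plus
64-byte XSAVE header).

#include <stdint.h>
#include <stdio.h>

#define XSTATE_AREA_MIN_SIZE (512 + 64)  /* FXSAVE area + XSAVE header */

/* Example per-component data; real values come from CPUID leaf 0xD:
 * sizes[i] = sub-leaf i EAX, align64[i] = sub-leaf i ECX bit 1. */
static const unsigned int sizes[]   = { 0, 0, 256, 64, 64, 64, 512, 1024 };
static const unsigned int align64[] = { 0, 0,   0,  0,  0,  0,   0,    1 };
#define NFEAT (sizeof(sizes) / sizeof(sizes[0]))

/*
 * Compute compacted-format offsets for the components enabled in
 * xcomp_bv: each enabled component follows the previous enabled one,
 * rounded up to a 64-byte boundary when its alignment bit is set.
 * Components 0 (x87) and 1 (SSE) live at fixed legacy offsets and are
 * not handled here.
 */
static void comp_offsets(uint64_t xcomp_bv, unsigned int offs[])
{
    unsigned int i, next = XSTATE_AREA_MIN_SIZE;

    for ( i = 2; i < NFEAT; i++ )
    {
        if ( !(xcomp_bv & (1ULL << i)) )
            continue;                    /* disabled: occupies no space */
        if ( align64[i] )
            next = (next + 63) & ~63u;   /* ROUNDUP(next, 64) */
        offs[i] = next;
        next += sizes[i];
    }
}

int main(void)
{
    unsigned int i, offs[NFEAT] = { 0 };
    /* Compaction bit 63 plus components 0-2 and 5-7 enabled. */
    uint64_t xcomp_bv = (1ULL << 63) | 0xe7;

    comp_offsets(xcomp_bv, offs);
    for ( i = 2; i < NFEAT; i++ )
        if ( xcomp_bv & (1ULL << i) )
            printf("component %u: offset %u, size %u\n",
                   i, offs[i], sizes[i]);
    return 0;
}

The point of the fix: the old code effectively keyed this computation off
xfeature_mask, so a guest enabling only a subset of the host's features
would have its components read from and written to the wrong offsets.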
 xen/arch/x86/xstate.c        | 29 +++++++++++++++++++++--------
 xen/include/asm-x86/xstate.h |  1 +
 2 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 4f2fb8e..0e7643b 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -26,6 +26,7 @@ u64 __read_mostly xfeature_mask;
 
 static unsigned int *__read_mostly xstate_offsets;
 unsigned int *__read_mostly xstate_sizes;
+static unsigned int *__read_mostly xstate_align;
 static unsigned int __read_mostly xstate_features;
 static unsigned int __read_mostly
     xstate_comp_offsets[sizeof(xfeature_mask)*8];
@@ -94,7 +95,7 @@ static bool_t xsave_area_compressed(const struct xsave_struct *xsave_area)
 
 static int setup_xstate_features(bool_t bsp)
 {
-    unsigned int leaf, tmp, eax, ebx;
+    unsigned int leaf, eax, ebx, ecx, edx;
 
     if ( bsp )
     {
@@ -106,34 +107,44 @@ static int setup_xstate_features(bool_t bsp)
         xstate_sizes = xzalloc_array(unsigned int, xstate_features);
         if ( !xstate_sizes )
             return -ENOMEM;
+
+        xstate_align = xzalloc_array(unsigned int, xstate_features);
+        if ( !xstate_align )
+            return -ENOMEM;
     }
 
     for ( leaf = 2; leaf < xstate_features; leaf++ )
     {
         if ( bsp )
+        {
             cpuid_count(XSTATE_CPUID, leaf, &xstate_sizes[leaf],
-                        &xstate_offsets[leaf], &tmp, &tmp);
+                        &xstate_offsets[leaf], &ecx, &edx);
+            xstate_align[leaf] = ecx & XSTATE_ALIGN64;
+        }
         else
         {
             cpuid_count(XSTATE_CPUID, leaf, &eax,
-                        &ebx, &tmp, &tmp);
+                        &ebx, &ecx, &edx);
             BUG_ON(eax != xstate_sizes[leaf]);
             BUG_ON(ebx != xstate_offsets[leaf]);
+            BUG_ON((ecx & XSTATE_ALIGN64) != xstate_align[leaf]);
         }
     }
 
     return 0;
 }
 
-static void __init setup_xstate_comp(void)
+static void setup_xstate_comp(const struct xsave_struct *xsave)
 {
     unsigned int i;
+    u64 xcomp_bv = xsave->xsave_hdr.xcomp_bv;
 
     /*
      * The FP xstates and SSE xstates are legacy states. They are always
      * in the fixed offsets in the xsave area in either compacted form
      * or standard form.
      */
+    memset(xstate_comp_offsets, 0, sizeof(xstate_comp_offsets));
     xstate_comp_offsets[0] = 0;
     xstate_comp_offsets[1] = XSAVE_SSE_OFFSET;
 
@@ -141,8 +152,10 @@
     for ( i = 3; i < xstate_features; i++ )
     {
-        xstate_comp_offsets[i] = xstate_comp_offsets[i - 1] +
-                                 (((1ul << i) & xfeature_mask)
+        xstate_comp_offsets[i] = (xstate_align[i] ?
+                                  ROUNDUP(xstate_comp_offsets[i-1], 64) :
+                                  xstate_comp_offsets[i - 1]) +
+                                 (((1ul << i) & xcomp_bv)
                                   ? xstate_sizes[i - 1] : 0);
         ASSERT(xstate_comp_offsets[i] + xstate_sizes[i] <= xsave_cntxt_size);
     }
@@ -172,6 +185,7 @@ void expand_xsave_states(struct vcpu *v, void *dest, unsigned int size)
     }
 
     ASSERT(xsave_area_compressed(xsave));
+    setup_xstate_comp(xsave);
     /*
      * Copy legacy XSAVE area and XSAVE hdr area.
      */
@@ -223,6 +237,7 @@ void compress_xsave_states(struct vcpu *v, const void *src, unsigned int size)
 
     xsave->xsave_hdr.xstate_bv = xstate_bv;
     xsave->xsave_hdr.xcomp_bv = v->arch.xcr0_accum | XSTATE_COMPACTION_ENABLED;
+    setup_xstate_comp(xsave);
     /*
      * Copy each region from the non-compacted offset to the
      * possibly compacted offset.
@@ -568,8 +583,6 @@ void xstate_init(struct cpuinfo_x86 *c)
     if ( setup_xstate_features(bsp) && bsp )
         BUG();
 
-    if ( bsp && (cpu_has_xsaves || cpu_has_xsavec) )
-        setup_xstate_comp();
 }
 
 static bool_t valid_xcr0(u64 xcr0)
diff --git a/xen/include/asm-x86/xstate.h b/xen/include/asm-x86/xstate.h
index 84f0af9..0215070 100644
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -44,6 +44,7 @@
 
 #define XSTATE_LAZY    (XSTATE_ALL & ~XSTATE_NONLAZY)
 #define XSTATE_COMPACTION_ENABLED  (1ULL << 63)
+#define XSTATE_ALIGN64 (1ULL << 1)
 
 extern u64 xfeature_mask;
 extern unsigned int *xstate_sizes;
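As background for the new XSTATE_ALIGN64 bit (illustrative only, not part
of the patch): a small user-space probe showing how CPUID leaf 0xD
enumerates each component's size (EAX), non-compacted offset (EBX) and
64-byte alignment requirement (ECX bit 1). It assumes a GCC/Clang
toolchain for the __cpuid_count() macro from <cpuid.h>.

#include <cpuid.h>   /* GCC/Clang __cpuid_count() */
#include <stdio.h>

int main(void)
{
    unsigned int i, eax, ebx, ecx, edx;

    for ( i = 2; i < 63; i++ )
    {
        __cpuid_count(0xd, i, eax, ebx, ecx, edx);
        if ( !eax )        /* size 0: component not enumerated */
            continue;
        printf("component %2u: size %4u, offset %4u, align64 %u\n",
               i, eax, ebx, !!(ecx & 2));
    }
    return 0;
}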