From patchwork Thu May 14 09:07:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 11548339 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9EF25912 for ; Thu, 14 May 2020 09:08:24 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 85BF1206F1 for ; Thu, 14 May 2020 09:08:24 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 85BF1206F1 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9pt-00011B-Qg; Thu, 14 May 2020 09:07:25 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9pr-000115-TM for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:07:23 +0000 X-Inumbo-ID: 51a12e22-95c2-11ea-a463-12813bfff9fa Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 51a12e22-95c2-11ea-a463-12813bfff9fa; Thu, 14 May 2020 09:07:21 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 7DFE6ABE6; Thu, 14 May 2020 09:07:23 +0000 (UTC) Subject: [PATCH v9 1/9] x86emul: address x86_insn_is_mem_{access,write}() omissions From: Jan Beulich To: "xen-devel@lists.xenproject.org" References: Message-ID: <9a1fdfad-d7e5-df0c-0bb5-8b8c609312d3@suse.com> Date: Thu, 14 May 2020 11:07:18 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" First of all explain in comments what the functions' purposes are. Then make them actually match their comments. Note that fc6fa977be54 ("x86emul: extend x86_insn_is_mem_write() coverage") didn't actually fix the function's behavior for {,V}STMXCSR: Both are covered by generic code higher up in the function, due to x86_decode_twobyte() already doing suitable adjustments. And VSTMXCSR wouldn't have been covered anyway without a further X86EMUL_OPC_VEX() case label. Keep the inner case label in a comment for reference. Signed-off-by: Jan Beulich --- v9: New. --- I'm intending to add testing of the functions to the harness, but this will take some more time. Possibly such a test harness addition could be acceptable even after the freeze point. --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -11438,25 +11438,62 @@ x86_insn_operand_ea(const struct x86_emu return state->ea.mem.off; } +/* + * This function means to return 'true' for all supported insns with explicit + * accesses to memory. 
This means also insns which don't have an explicit + * memory operand (like POP), but it does not mean e.g. segment selector + * loads, where the descriptor table access is considered an implicit one. + */ bool x86_insn_is_mem_access(const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt) { + if ( mode_64bit() && state->not_64bit ) + return false; + if ( state->ea.type == OP_MEM ) return ctxt->opcode != 0x8d /* LEA */ && + (ctxt->opcode & ~7) != X86EMUL_OPC(0x0f, 0x18) /* NOP space */ && (ctxt->opcode != X86EMUL_OPC(0x0f, 0x01) || (state->modrm_reg & 7) != 7) /* INVLPG */; switch ( ctxt->opcode ) { + case 0x06 ... 0x07: /* PUSH / POP %es */ + case 0x0e: /* PUSH %cs */ + case 0x16 ... 0x17: /* PUSH / POP %ss */ + case 0x1e ... 0x1f: /* PUSH / POP %ds */ + case 0x50 ... 0x5f: /* PUSH / POP reg */ + case 0x60 ... 0x61: /* PUSHA / POPA */ + case 0x68: case 0x6a: /* PUSH imm */ case 0x6c ... 0x6f: /* INS / OUTS */ + case 0x8f: /* POP r/m */ + case 0x9a: /* CALL (far, direct) */ + case 0x9c ... 0x9d: /* PUSHF / POPF */ case 0xa4 ... 0xa7: /* MOVS / CMPS */ case 0xaa ... 0xaf: /* STOS / LODS / SCAS */ + case 0xc2 ... 0xc3: /* RET (near) */ + case 0xc8 ... 0xc9: /* ENTER / LEAVE */ + case 0xca ... 0xcb: /* RET (far) */ case 0xd7: /* XLAT */ + case 0xe8: /* CALL (near, direct) */ + case X86EMUL_OPC(0x0f, 0xa0): /* PUSH %fs */ + case X86EMUL_OPC(0x0f, 0xa1): /* POP %fs */ + case X86EMUL_OPC(0x0f, 0xa8): /* PUSH %gs */ + case X86EMUL_OPC(0x0f, 0xa9): /* POP %gs */ CASE_SIMD_PACKED_INT_VEX(0x0f, 0xf7): /* MASKMOV{Q,DQU} */ /* VMASKMOVDQU */ return true; + case 0xff: + switch ( state->modrm_reg & 7 ) + { + case 2: /* CALL (near, indirect) */ + case 6: /* PUSH r/m */ + return true; + } + break; + case X86EMUL_OPC(0x0f, 0x01): /* Cover CLZERO. */ return (state->modrm_rm & 7) == 4 && (state->modrm_reg & 7) == 7; @@ -11465,10 +11502,20 @@ x86_insn_is_mem_access(const struct x86_ return false; } +/* + * This function means to return 'true' for all supported insns with explicit + * writes to memory. This means also insns which don't have an explicit + * memory operand (like PUSH), but it does not mean e.g. segment selector + * loads, where the (possible) descriptor table write is considered an + * implicit access. + */ bool x86_insn_is_mem_write(const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt) { + if ( mode_64bit() && state->not_64bit ) + return false; + switch ( state->desc & DstMask ) { case DstMem: @@ -11490,9 +11537,25 @@ x86_insn_is_mem_write(const struct x86_e switch ( ctxt->opcode ) { + case 0x63: /* ARPL */ + return !mode_64bit(); + + case 0x06: /* PUSH %es */ + case 0x0e: /* PUSH %cs */ + case 0x16: /* PUSH %ss */ + case 0x1e: /* PUSH %ds */ + case 0x50 ... 
0x57: /* PUSH reg */ + case 0x60: /* PUSHA */ + case 0x68: case 0x6a: /* PUSH imm */ case 0x6c: case 0x6d: /* INS */ + case 0x9a: /* CALL (far, direct) */ + case 0x9c: /* PUSHF */ case 0xa4: case 0xa5: /* MOVS */ case 0xaa: case 0xab: /* STOS */ + case 0xc8: /* ENTER */ + case 0xe8: /* CALL (near, direct) */ + case X86EMUL_OPC(0x0f, 0xa0): /* PUSH %fs */ + case X86EMUL_OPC(0x0f, 0xa8): /* PUSH %gs */ case X86EMUL_OPC(0x0f, 0xab): /* BTS */ case X86EMUL_OPC(0x0f, 0xb3): /* BTR */ case X86EMUL_OPC(0x0f, 0xbb): /* BTC */ @@ -11550,6 +11613,16 @@ x86_insn_is_mem_write(const struct x86_e } break; + case 0xff: + switch ( state->modrm_reg & 7 ) + { + case 2: /* CALL (near, indirect) */ + case 3: /* CALL (far, indirect) */ + case 6: /* PUSH r/m */ + return true; + } + break; + case X86EMUL_OPC(0x0f, 0x01): switch ( state->modrm_reg & 7 ) { @@ -11564,7 +11637,7 @@ x86_insn_is_mem_write(const struct x86_e switch ( state->modrm_reg & 7 ) { case 0: /* FXSAVE */ - case 3: /* {,V}STMXCSR */ + /* case 3: STMXCSR - handled above */ case 4: /* XSAVE */ case 6: /* XSAVEOPT */ return true; From patchwork Thu May 14 09:07:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 11548341 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A3F83912 for ; Thu, 14 May 2020 09:09:08 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7D0A420675 for ; Thu, 14 May 2020 09:09:08 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7D0A420675 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9qJ-000148-6g; Thu, 14 May 2020 09:07:51 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9qI-000142-KJ for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:07:50 +0000 X-Inumbo-ID: 6193c9f2-95c2-11ea-b9cf-bc764e2007e4 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 6193c9f2-95c2-11ea-b9cf-bc764e2007e4; Thu, 14 May 2020 09:07:48 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 6AB74AF33; Thu, 14 May 2020 09:07:50 +0000 (UTC) Subject: [PATCH v9 2/9] x86emul: disable FPU/MMX/SIMD insn emulation when !HVM From: Jan Beulich To: "xen-devel@lists.xenproject.org" References: Message-ID: <7349e0e5-5347-e0cc-f661-df8961b2a2aa@suse.com> Date: Thu, 14 May 2020 11:07:45 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" In a pure 
PV environment (the PV shim in particular) we don't really need emulation of all these. To limit #ifdef-ary, utilize some of the CASE_*() macros we have, by providing variants expanding to (effectively) nothing (really a label, which in turn requires passing -Wno-unused-label to the compiler when building such configurations). Due to the mixture of macro and #ifdef use, the placement of some of the #ifdef-s is a little arbitrary. The resulting object file's .text is less than half the size of the original, and also seems to compile a little more quickly. This is meant as a first step; more parts can likely be disabled down the road. Suggested-by: Andrew Cooper Signed-off-by: Jan Beulich --- v7: Integrate into this series. Re-base. --- I'll be happy to take suggestions on how to avoid -Wno-unused-label. --- a/xen/arch/x86/Makefile +++ b/xen/arch/x86/Makefile @@ -73,6 +73,9 @@ obj-y += vm_event.o obj-y += xstate.o extra-y += asm-macros.i +ifneq ($(CONFIG_HVM),y) +x86_emulate.o: CFLAGS-y += -Wno-unused-label +endif x86_emulate.o: x86_emulate/x86_emulate.c x86_emulate/x86_emulate.h efi-y := $(shell if [ ! -r $(BASEDIR)/include/xen/compile.h -o \ --- a/xen/arch/x86/x86_emulate.c +++ b/xen/arch/x86/x86_emulate.c @@ -42,6 +42,12 @@ } \ }) +#ifndef CONFIG_HVM +# define X86EMUL_NO_FPU +# define X86EMUL_NO_MMX +# define X86EMUL_NO_SIMD +#endif + #include "x86_emulate/x86_emulate.c" int x86emul_read_xcr(unsigned int reg, uint64_t *val, --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -3492,6 +3492,7 @@ x86_decode( op_bytes = 4; break; +#ifndef X86EMUL_NO_SIMD case simd_packed_int: switch ( vex.pfx ) { @@ -3557,6 +3558,7 @@ x86_decode( case simd_256: op_bytes = 32; break; +#endif /* !X86EMUL_NO_SIMD */ default: op_bytes = 0; @@ -3711,6 +3713,7 @@ x86_emulate( break; } +#ifndef X86EMUL_NO_SIMD /* With a memory operand, fetch the mask register in use (if any). */ if ( ea.type == OP_MEM && evex.opmsk && _get_fpu(fpu_type = X86EMUL_FPU_opmask, ctxt, ops) == X86EMUL_OKAY ) @@ -3741,6 +3744,7 @@ x86_emulate( put_fpu(X86EMUL_FPU_opmask, false, state, ctxt, ops); fpu_type = X86EMUL_FPU_none; } +#endif /* !X86EMUL_NO_SIMD */ /* Decode (but don't fetch) the destination operand: register or memory. */ switch ( d & DstMask ) @@ -4386,11 +4390,13 @@ x86_emulate( singlestep = _regs.eflags & X86_EFLAGS_TF; break; +#ifndef X86EMUL_NO_FPU case 0x9b: /* wait/fwait */ host_and_vcpu_must_have(fpu); get_fpu(X86EMUL_FPU_wait); emulate_fpu_insn_stub(b); break; +#endif case 0x9c: /* pushf */ if ( (_regs.eflags & X86_EFLAGS_VM) && @@ -4800,6 +4806,7 @@ x86_emulate( break; } +#ifndef X86EMUL_NO_FPU case 0xd8: /* FPU 0xd8 */ host_and_vcpu_must_have(fpu); get_fpu(X86EMUL_FPU_fpu); @@ -5134,6 +5141,7 @@ x86_emulate( } } break; +#endif /* !X86EMUL_NO_FPU */ case 0xe0 ... 0xe2: /* loop{,z,nz} */ { unsigned long count = get_loop_count(&_regs, ad_bytes); @@ -6079,6 +6087,8 @@ x86_emulate( case X86EMUL_OPC(0x0f, 0x19) ... 
X86EMUL_OPC(0x0f, 0x1f): /* nop */ break; +#ifndef X86EMUL_NO_MMX + case X86EMUL_OPC(0x0f, 0x0e): /* femms */ host_and_vcpu_must_have(3dnow); asm volatile ( "femms" ); @@ -6099,39 +6109,71 @@ x86_emulate( state->simd_size = simd_other; goto simd_0f_imm8; -#define CASE_SIMD_PACKED_INT(pfx, opc) \ +#endif /* !X86EMUL_NO_MMX */ + +#if !defined(X86EMUL_NO_SIMD) && !defined(X86EMUL_NO_MMX) +# define CASE_SIMD_PACKED_INT(pfx, opc) \ case X86EMUL_OPC(pfx, opc): \ case X86EMUL_OPC_66(pfx, opc) -#define CASE_SIMD_PACKED_INT_VEX(pfx, opc) \ +#elif !defined(X86EMUL_NO_SIMD) +# define CASE_SIMD_PACKED_INT(pfx, opc) \ + case X86EMUL_OPC_66(pfx, opc) +#elif !defined(X86EMUL_NO_MMX) +# define CASE_SIMD_PACKED_INT(pfx, opc) \ + case X86EMUL_OPC(pfx, opc) +#else +# define CASE_SIMD_PACKED_INT(pfx, opc) C##pfx##_##opc +#endif + +#ifndef X86EMUL_NO_SIMD + +# define CASE_SIMD_PACKED_INT_VEX(pfx, opc) \ CASE_SIMD_PACKED_INT(pfx, opc): \ case X86EMUL_OPC_VEX_66(pfx, opc) -#define CASE_SIMD_ALL_FP(kind, pfx, opc) \ +# define CASE_SIMD_ALL_FP(kind, pfx, opc) \ CASE_SIMD_PACKED_FP(kind, pfx, opc): \ CASE_SIMD_SCALAR_FP(kind, pfx, opc) -#define CASE_SIMD_PACKED_FP(kind, pfx, opc) \ +# define CASE_SIMD_PACKED_FP(kind, pfx, opc) \ case X86EMUL_OPC##kind(pfx, opc): \ case X86EMUL_OPC##kind##_66(pfx, opc) -#define CASE_SIMD_SCALAR_FP(kind, pfx, opc) \ +# define CASE_SIMD_SCALAR_FP(kind, pfx, opc) \ case X86EMUL_OPC##kind##_F3(pfx, opc): \ case X86EMUL_OPC##kind##_F2(pfx, opc) -#define CASE_SIMD_SINGLE_FP(kind, pfx, opc) \ +# define CASE_SIMD_SINGLE_FP(kind, pfx, opc) \ case X86EMUL_OPC##kind(pfx, opc): \ case X86EMUL_OPC##kind##_F3(pfx, opc) -#define CASE_SIMD_ALL_FP_VEX(pfx, opc) \ +# define CASE_SIMD_ALL_FP_VEX(pfx, opc) \ CASE_SIMD_ALL_FP(, pfx, opc): \ CASE_SIMD_ALL_FP(_VEX, pfx, opc) -#define CASE_SIMD_PACKED_FP_VEX(pfx, opc) \ +# define CASE_SIMD_PACKED_FP_VEX(pfx, opc) \ CASE_SIMD_PACKED_FP(, pfx, opc): \ CASE_SIMD_PACKED_FP(_VEX, pfx, opc) -#define CASE_SIMD_SCALAR_FP_VEX(pfx, opc) \ +# define CASE_SIMD_SCALAR_FP_VEX(pfx, opc) \ CASE_SIMD_SCALAR_FP(, pfx, opc): \ CASE_SIMD_SCALAR_FP(_VEX, pfx, opc) -#define CASE_SIMD_SINGLE_FP_VEX(pfx, opc) \ +# define CASE_SIMD_SINGLE_FP_VEX(pfx, opc) \ CASE_SIMD_SINGLE_FP(, pfx, opc): \ CASE_SIMD_SINGLE_FP(_VEX, pfx, opc) +#else + +# define CASE_SIMD_PACKED_INT_VEX(pfx, opc) \ + CASE_SIMD_PACKED_INT(pfx, opc) + +# define CASE_SIMD_ALL_FP(kind, pfx, opc) C##kind##pfx##_##opc +# define CASE_SIMD_PACKED_FP(kind, pfx, opc) Cp##kind##pfx##_##opc +# define CASE_SIMD_SCALAR_FP(kind, pfx, opc) Cs##kind##pfx##_##opc +# define CASE_SIMD_SINGLE_FP(kind, pfx, opc) C##kind##pfx##_##opc + +# define CASE_SIMD_ALL_FP_VEX(pfx, opc) CASE_SIMD_ALL_FP(, pfx, opc) +# define CASE_SIMD_PACKED_FP_VEX(pfx, opc) CASE_SIMD_PACKED_FP(, pfx, opc) +# define CASE_SIMD_SCALAR_FP_VEX(pfx, opc) CASE_SIMD_SCALAR_FP(, pfx, opc) +# define CASE_SIMD_SINGLE_FP_VEX(pfx, opc) CASE_SIMD_SINGLE_FP(, pfx, opc) + +#endif + CASE_SIMD_SCALAR_FP(, 0x0f, 0x2b): /* movnts{s,d} xmm,mem */ host_and_vcpu_must_have(sse4a); /* fall through */ @@ -6269,6 +6311,8 @@ x86_emulate( insn_bytes = EVEX_PFX_BYTES + 2; break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_66(0x0f, 0x12): /* movlpd m64,xmm */ case X86EMUL_OPC_VEX_66(0x0f, 0x12): /* vmovlpd m64,xmm,xmm */ CASE_SIMD_PACKED_FP_VEX(0x0f, 0x13): /* movlp{s,d} xmm,m64 */ @@ -6375,6 +6419,8 @@ x86_emulate( avx512_vlen_check(false); goto simd_zmm; +#endif /* !X86EMUL_NO_SIMD */ + case X86EMUL_OPC(0x0f, 0x20): /* mov cr,reg */ case X86EMUL_OPC(0x0f, 0x21): /* mov dr,reg */ case 
X86EMUL_OPC(0x0f, 0x22): /* mov reg,cr */ @@ -6401,6 +6447,8 @@ x86_emulate( goto done; break; +#if !defined(X86EMUL_NO_MMX) && !defined(X86EMUL_NO_SIMD) + case X86EMUL_OPC_66(0x0f, 0x2a): /* cvtpi2pd mm/m64,xmm */ if ( ea.type == OP_REG ) { @@ -6412,6 +6460,8 @@ x86_emulate( op_bytes = (b & 4) && (vex.pfx & VEX_PREFIX_DOUBLE_MASK) ? 16 : 8; goto simd_0f_fp; +#endif /* !X86EMUL_NO_MMX && !X86EMUL_NO_SIMD */ + CASE_SIMD_SCALAR_FP_VEX(0x0f, 0x2a): /* {,v}cvtsi2s{s,d} r/m,xmm */ if ( vex.opcx == vex_none ) { @@ -6758,6 +6808,8 @@ x86_emulate( dst.val = src.val; break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_VEX(0x0f, 0x4a): /* kadd{w,q} k,k,k */ if ( !vex.w ) host_and_vcpu_must_have(avx512dq); @@ -6812,6 +6864,8 @@ x86_emulate( generate_exception_if(!vex.l || vex.w, EXC_UD); goto opmask_common; +#endif /* X86EMUL_NO_SIMD */ + CASE_SIMD_PACKED_FP_VEX(0x0f, 0x50): /* movmskp{s,d} xmm,reg */ /* vmovmskp{s,d} {x,y}mm,reg */ CASE_SIMD_PACKED_INT_VEX(0x0f, 0xd7): /* pmovmskb {,x}mm,reg */ @@ -6895,6 +6949,8 @@ x86_emulate( evex.w); goto avx512f_all_fp; +#ifndef X86EMUL_NO_SIMD + CASE_SIMD_PACKED_FP_VEX(0x0f, 0x5b): /* cvt{ps,dq}2{dq,ps} xmm/mem,xmm */ /* vcvt{ps,dq}2{dq,ps} {x,y}mm/mem,{x,y}mm */ case X86EMUL_OPC_F3(0x0f, 0x5b): /* cvttps2dq xmm/mem,xmm */ @@ -6925,6 +6981,8 @@ x86_emulate( op_bytes = 16 << evex.lr; goto simd_zmm; +#endif /* !X86EMUL_NO_SIMD */ + CASE_SIMD_PACKED_INT_VEX(0x0f, 0x60): /* punpcklbw {,x}mm/mem,{,x}mm */ /* vpunpcklbw {x,y}mm/mem,{x,y}mm,{x,y}mm */ CASE_SIMD_PACKED_INT_VEX(0x0f, 0x61): /* punpcklwd {,x}mm/mem,{,x}mm */ @@ -6951,6 +7009,7 @@ x86_emulate( /* vpackusbw {x,y}mm/mem,{x,y}mm,{x,y}mm */ CASE_SIMD_PACKED_INT_VEX(0x0f, 0x6b): /* packsswd {,x}mm/mem,{,x}mm */ /* vpacksswd {x,y}mm/mem,{x,y}mm,{x,y}mm */ +#ifndef X86EMUL_NO_SIMD case X86EMUL_OPC_66(0x0f, 0x6c): /* punpcklqdq xmm/m128,xmm */ case X86EMUL_OPC_VEX_66(0x0f, 0x6c): /* vpunpcklqdq {x,y}mm/mem,{x,y}mm,{x,y}mm */ case X86EMUL_OPC_66(0x0f, 0x6d): /* punpckhqdq xmm/m128,xmm */ @@ -7035,6 +7094,7 @@ x86_emulate( /* vpsubd {x,y}mm/mem,{x,y}mm,{x,y}mm */ case X86EMUL_OPC_66(0x0f, 0xfb): /* psubq xmm/m128,xmm */ case X86EMUL_OPC_VEX_66(0x0f, 0xfb): /* vpsubq {x,y}mm/mem,{x,y}mm,{x,y}mm */ +#endif /* !X86EMUL_NO_SIMD */ CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfc): /* paddb {,x}mm/mem,{,x}mm */ /* vpaddb {x,y}mm/mem,{x,y}mm,{x,y}mm */ CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfd): /* paddw {,x}mm/mem,{,x}mm */ @@ -7042,6 +7102,7 @@ x86_emulate( CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfe): /* paddd {,x}mm/mem,{,x}mm */ /* vpaddd {x,y}mm/mem,{x,y}mm,{x,y}mm */ simd_0f_int: +#ifndef X86EMUL_NO_SIMD if ( vex.opcx != vex_none ) { case X86EMUL_OPC_VEX_66(0x0f38, 0x00): /* vpshufb {x,y}mm/mem,{x,y}mm,{x,y}mm */ @@ -7083,11 +7144,14 @@ x86_emulate( } if ( vex.pfx ) goto simd_0f_sse2; +#endif /* !X86EMUL_NO_SIMD */ simd_0f_mmx: host_and_vcpu_must_have(mmx); get_fpu(X86EMUL_FPU_mmx); goto simd_0f_common; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_EVEX_66(0x0f, 0xf6): /* vpsadbw [xyz]mm/mem,[xyz]mm,[xyz]mm */ generate_exception_if(evex.opmsk, EXC_UD); /* fall through */ @@ -7181,6 +7245,8 @@ x86_emulate( generate_exception_if(!evex.w, EXC_UD); goto avx512f_no_sae; +#endif /* X86EMUL_NO_SIMD */ + CASE_SIMD_PACKED_INT_VEX(0x0f, 0x6e): /* mov{d,q} r/m,{,x}mm */ /* vmov{d,q} r/m,xmm */ CASE_SIMD_PACKED_INT_VEX(0x0f, 0x7e): /* mov{d,q} {,x}mm,r/m */ @@ -7222,6 +7288,8 @@ x86_emulate( ASSERT(!state->simd_size); break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_EVEX_66(0x0f, 0x6e): /* vmov{d,q} r/m,xmm */ case X86EMUL_OPC_EVEX_66(0x0f, 0x7e): /* 
vmov{d,q} xmm,r/m */ generate_exception_if((evex.lr || evex.opmsk || evex.brs || @@ -7294,11 +7362,15 @@ x86_emulate( d |= TwoOp; /* fall through */ case X86EMUL_OPC_66(0x0f, 0xd6): /* movq xmm,xmm/m64 */ +#endif /* !X86EMUL_NO_SIMD */ +#ifndef X86EMUL_NO_MMX case X86EMUL_OPC(0x0f, 0x6f): /* movq mm/m64,mm */ case X86EMUL_OPC(0x0f, 0x7f): /* movq mm,mm/m64 */ +#endif op_bytes = 8; goto simd_0f_int; +#ifndef X86EMUL_NO_SIMD CASE_SIMD_PACKED_INT_VEX(0x0f, 0x70):/* pshuf{w,d} $imm8,{,x}mm/mem,{,x}mm */ /* vpshufd $imm8,{x,y}mm/mem,{x,y}mm */ case X86EMUL_OPC_F3(0x0f, 0x70): /* pshufhw $imm8,xmm/m128,xmm */ @@ -7307,12 +7379,15 @@ x86_emulate( case X86EMUL_OPC_VEX_F2(0x0f, 0x70): /* vpshuflw $imm8,{x,y}mm/mem,{x,y}mm */ d = (d & ~SrcMask) | SrcMem | TwoOp; op_bytes = vex.pfx ? 16 << vex.l : 8; +#endif simd_0f_int_imm8: if ( vex.opcx != vex_none ) { +#ifndef X86EMUL_NO_SIMD case X86EMUL_OPC_VEX_66(0x0f3a, 0x0e): /* vpblendw $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */ case X86EMUL_OPC_VEX_66(0x0f3a, 0x0f): /* vpalignr $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */ case X86EMUL_OPC_VEX_66(0x0f3a, 0x42): /* vmpsadbw $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */ +#endif if ( vex.l ) { simd_0f_imm8_avx2: @@ -7320,6 +7395,7 @@ x86_emulate( } else { +#ifndef X86EMUL_NO_SIMD case X86EMUL_OPC_VEX_66(0x0f3a, 0x08): /* vroundps $imm8,{x,y}mm/mem,{x,y}mm */ case X86EMUL_OPC_VEX_66(0x0f3a, 0x09): /* vroundpd $imm8,{x,y}mm/mem,{x,y}mm */ case X86EMUL_OPC_VEX_66(0x0f3a, 0x0a): /* vroundss $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */ @@ -7327,6 +7403,7 @@ x86_emulate( case X86EMUL_OPC_VEX_66(0x0f3a, 0x0c): /* vblendps $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */ case X86EMUL_OPC_VEX_66(0x0f3a, 0x0d): /* vblendpd $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */ case X86EMUL_OPC_VEX_66(0x0f3a, 0x40): /* vdpps $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */ +#endif simd_0f_imm8_avx: host_and_vcpu_must_have(avx); } @@ -7360,6 +7437,8 @@ x86_emulate( insn_bytes = PFX_BYTES + 3; break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_EVEX_66(0x0f, 0x70): /* vpshufd $imm8,[xyz]mm/mem,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_F3(0x0f, 0x70): /* vpshufhw $imm8,[xyz]mm/mem,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_F2(0x0f, 0x70): /* vpshuflw $imm8,[xyz]mm/mem,[xyz]mm{k} */ @@ -7418,6 +7497,9 @@ x86_emulate( opc[1] = modrm; opc[2] = imm1; insn_bytes = PFX_BYTES + 3; + +#endif /* X86EMUL_NO_SIMD */ + simd_0f_reg_only: opc[insn_bytes - PFX_BYTES] = 0xc3; @@ -7428,6 +7510,8 @@ x86_emulate( ASSERT(!state->simd_size); break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_EVEX_66(0x0f, 0x71): /* Grp12 */ switch ( modrm_reg & 7 ) { @@ -7459,6 +7543,9 @@ x86_emulate( } goto unrecognized_insn; +#endif /* !X86EMUL_NO_SIMD */ +#ifndef X86EMUL_NO_MMX + case X86EMUL_OPC(0x0f, 0x73): /* Grp14 */ switch ( modrm_reg & 7 ) { @@ -7468,6 +7555,9 @@ x86_emulate( } goto unrecognized_insn; +#endif /* !X86EMUL_NO_MMX */ +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_66(0x0f, 0x73): case X86EMUL_OPC_VEX_66(0x0f, 0x73): switch ( modrm_reg & 7 ) @@ -7498,7 +7588,12 @@ x86_emulate( } goto unrecognized_insn; +#endif /* !X86EMUL_NO_SIMD */ + +#ifndef X86EMUL_NO_MMX case X86EMUL_OPC(0x0f, 0x77): /* emms */ +#endif +#ifndef X86EMUL_NO_SIMD case X86EMUL_OPC_VEX(0x0f, 0x77): /* vzero{all,upper} */ if ( vex.opcx != vex_none ) { @@ -7544,6 +7639,7 @@ x86_emulate( #endif } else +#endif /* !X86EMUL_NO_SIMD */ { host_and_vcpu_must_have(mmx); get_fpu(X86EMUL_FPU_mmx); @@ -7557,6 +7653,8 @@ x86_emulate( insn_bytes = PFX_BYTES + 1; goto simd_0f_reg_only; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_66(0x0f, 0x78): /* Grp17 */ switch ( modrm_reg & 7 
) { @@ -7654,6 +7752,8 @@ x86_emulate( op_bytes = 8; goto simd_zmm; +#endif /* !X86EMUL_NO_SIMD */ + case X86EMUL_OPC(0x0f, 0x80) ... X86EMUL_OPC(0x0f, 0x8f): /* jcc (near) */ if ( test_cc(b, _regs.eflags) ) jmp_rel((int32_t)src.val); @@ -7664,6 +7764,8 @@ x86_emulate( dst.val = test_cc(b, _regs.eflags); break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_VEX(0x0f, 0x91): /* kmov{w,q} k,mem */ case X86EMUL_OPC_VEX_66(0x0f, 0x91): /* kmov{b,d} k,mem */ generate_exception_if(ea.type != OP_MEM, EXC_UD); @@ -7812,6 +7914,8 @@ x86_emulate( dst.type = OP_NONE; break; +#endif /* !X86EMUL_NO_SIMD */ + case X86EMUL_OPC(0x0f, 0xa2): /* cpuid */ msr_val = 0; fail_if(ops->cpuid == NULL); @@ -7908,6 +8012,7 @@ x86_emulate( case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */ switch ( modrm_reg & 7 ) { +#ifndef X86EMUL_NO_SIMD case 2: /* ldmxcsr */ generate_exception_if(vex.pfx, EXC_UD); vcpu_must_have(sse); @@ -7926,6 +8031,7 @@ x86_emulate( get_fpu(vex.opcx ? X86EMUL_FPU_ymm : X86EMUL_FPU_xmm); asm volatile ( "stmxcsr %0" : "=m" (dst.val) ); break; +#endif /* X86EMUL_NO_SIMD */ case 5: /* lfence */ fail_if(modrm_mod != 3); @@ -7974,6 +8080,8 @@ x86_emulate( } break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_VEX(0x0f, 0xae): /* Grp15 */ switch ( modrm_reg & 7 ) { @@ -7988,6 +8096,8 @@ x86_emulate( } goto unrecognized_insn; +#endif /* !X86EMUL_NO_SIMD */ + case X86EMUL_OPC_F3(0x0f, 0xae): /* Grp15 */ fail_if(modrm_mod != 3); generate_exception_if((modrm_reg & 4) || !mode_64bit(), EXC_UD); @@ -8227,6 +8337,8 @@ x86_emulate( } goto simd_0f_imm8_avx; +#ifndef X86EMUL_NO_SIMD + CASE_SIMD_ALL_FP(_EVEX, 0x0f, 0xc2): /* vcmp{p,s}{s,d} $imm8,[xyz]mm/mem,[xyz]mm,k{k} */ generate_exception_if((evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK) || (ea.type != OP_REG && evex.brs && @@ -8253,6 +8365,8 @@ x86_emulate( insn_bytes = EVEX_PFX_BYTES + 3; break; +#endif /* !X86EMUL_NO_SIMD */ + case X86EMUL_OPC(0x0f, 0xc3): /* movnti */ /* Ignore the non-temporal hint for now. 
*/ vcpu_must_have(sse2); @@ -8267,6 +8381,8 @@ x86_emulate( ea.type = OP_MEM; goto simd_0f_int_imm8; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_EVEX_66(0x0f, 0xc4): /* vpinsrw $imm8,r32/m16,xmm,xmm */ case X86EMUL_OPC_EVEX_66(0x0f3a, 0x20): /* vpinsrb $imm8,r32/m8,xmm,xmm */ case X86EMUL_OPC_EVEX_66(0x0f3a, 0x22): /* vpinsr{d,q} $imm8,r/m,xmm,xmm */ @@ -8284,6 +8400,8 @@ x86_emulate( state->simd_size = simd_other; goto avx512f_imm8_no_sae; +#endif /* !X86EMUL_NO_SIMD */ + CASE_SIMD_PACKED_INT_VEX(0x0f, 0xc5): /* pextrw $imm8,{,x}mm,reg */ /* vpextrw $imm8,xmm,reg */ generate_exception_if(vex.l, EXC_UD); @@ -8299,6 +8417,8 @@ x86_emulate( insn_bytes = PFX_BYTES + 3; goto simd_0f_to_gpr; +#ifndef X86EMUL_NO_SIMD + CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0xc6): /* vshufp{s,d} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */ generate_exception_if(evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK), EXC_UD); @@ -8313,6 +8433,8 @@ x86_emulate( avx512_vlen_check(false); goto simd_imm8_zmm; +#endif /* X86EMUL_NO_SIMD */ + case X86EMUL_OPC(0x0f, 0xc7): /* Grp9 */ { union { @@ -8503,6 +8625,8 @@ x86_emulate( } break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_EVEX_66(0x0f, 0xd2): /* vpsrld xmm/m128,[xyz]mm,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f, 0xd3): /* vpsrlq xmm/m128,[xyz]mm,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f, 0xe2): /* vpsra{d,q} xmm/m128,[xyz]mm,[xyz]mm{k} */ @@ -8524,12 +8648,18 @@ x86_emulate( generate_exception_if(evex.w != (b & 1), EXC_UD); goto avx512f_no_sae; +#endif /* !X86EMUL_NO_SIMD */ +#ifndef X86EMUL_NO_MMX + case X86EMUL_OPC(0x0f, 0xd4): /* paddq mm/m64,mm */ case X86EMUL_OPC(0x0f, 0xf4): /* pmuludq mm/m64,mm */ case X86EMUL_OPC(0x0f, 0xfb): /* psubq mm/m64,mm */ vcpu_must_have(sse2); goto simd_0f_mmx; +#endif /* !X86EMUL_NO_MMX */ +#if !defined(X86EMUL_NO_MMX) && !defined(X86EMUL_NO_SIMD) + case X86EMUL_OPC_F3(0x0f, 0xd6): /* movq2dq mm,xmm */ case X86EMUL_OPC_F2(0x0f, 0xd6): /* movdq2q xmm,mm */ generate_exception_if(ea.type != OP_REG, EXC_UD); @@ -8537,6 +8667,9 @@ x86_emulate( host_and_vcpu_must_have(mmx); goto simd_0f_int; +#endif /* !X86EMUL_NO_MMX && !X86EMUL_NO_SIMD */ +#ifndef X86EMUL_NO_MMX + case X86EMUL_OPC(0x0f, 0xe7): /* movntq mm,m64 */ generate_exception_if(ea.type != OP_MEM, EXC_UD); sfence = true; @@ -8552,6 +8685,9 @@ x86_emulate( vcpu_must_have(mmxext); goto simd_0f_mmx; +#endif /* !X86EMUL_NO_MMX */ +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_EVEX_66(0x0f, 0xda): /* vpminub [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f, 0xde): /* vpmaxub [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */ case X86EMUL_OPC_EVEX_66(0x0f, 0xe4): /* vpmulhuw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */ @@ -8572,6 +8708,8 @@ x86_emulate( op_bytes = 8 << (!!(vex.pfx & VEX_PREFIX_DOUBLE_MASK) + vex.l); goto simd_0f_cvt; +#endif /* !X86EMUL_NO_SIMD */ + CASE_SIMD_PACKED_INT_VEX(0x0f, 0xf7): /* {,v}maskmov{q,dqu} {,x}mm,{,x}mm */ generate_exception_if(ea.type != OP_REG, EXC_UD); if ( vex.opcx != vex_none ) @@ -8675,6 +8813,8 @@ x86_emulate( insn_bytes = PFX_BYTES + 3; break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_VEX_66(0x0f38, 0x19): /* vbroadcastsd xmm/m64,ymm */ case X86EMUL_OPC_VEX_66(0x0f38, 0x1a): /* vbroadcastf128 m128,ymm */ generate_exception_if(!vex.l, EXC_UD); @@ -9257,6 +9397,8 @@ x86_emulate( ASSERT(!state->simd_size); break; +#endif /* !X86EMUL_NO_SIMD */ + case X86EMUL_OPC_66(0x0f38, 0x82): /* invpcid reg,m128 */ vcpu_must_have(invpcid); generate_exception_if(ea.type != OP_MEM, EXC_UD); @@ -9299,6 +9441,8 @@ x86_emulate( state->simd_size = simd_none; break; +#ifndef X86EMUL_NO_SIMD 
+ case X86EMUL_OPC_EVEX_66(0x0f38, 0x83): /* vpmultishiftqb [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */ generate_exception_if(!evex.w, EXC_UD); host_and_vcpu_must_have(avx512_vbmi); @@ -9862,6 +10006,8 @@ x86_emulate( generate_exception_if(evex.brs || evex.opmsk, EXC_UD); goto avx512f_no_sae; +#endif /* !X86EMUL_NO_SIMD */ + case X86EMUL_OPC(0x0f38, 0xf0): /* movbe m,r */ case X86EMUL_OPC(0x0f38, 0xf1): /* movbe r,m */ vcpu_must_have(movbe); @@ -10027,6 +10173,8 @@ x86_emulate( : "0" ((uint32_t)src.val), "rm" (_regs.edx) ); break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_VEX_66(0x0f3a, 0x00): /* vpermq $imm8,ymm/m256,ymm */ case X86EMUL_OPC_VEX_66(0x0f3a, 0x01): /* vpermpd $imm8,ymm/m256,ymm */ generate_exception_if(!vex.l || !vex.w, EXC_UD); @@ -10087,6 +10235,8 @@ x86_emulate( avx512_vlen_check(b & 2); goto simd_imm8_zmm; +#endif /* X86EMUL_NO_SIMD */ + CASE_SIMD_PACKED_INT(0x0f3a, 0x0f): /* palignr $imm8,{,x}mm/mem,{,x}mm */ host_and_vcpu_must_have(ssse3); if ( vex.pfx ) @@ -10114,6 +10264,8 @@ x86_emulate( insn_bytes = PFX_BYTES + 4; break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_EVEX_66(0x0f3a, 0x42): /* vdbpsadbw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */ generate_exception_if(evex.w, EXC_UD); /* fall through */ @@ -10612,6 +10764,8 @@ x86_emulate( generate_exception_if(vex.l, EXC_UD); goto simd_0f_imm8_avx; +#endif /* X86EMUL_NO_SIMD */ + case X86EMUL_OPC_VEX_F2(0x0f3a, 0xf0): /* rorx imm,r/m,r */ vcpu_must_have(bmi2); generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD); @@ -10626,6 +10780,8 @@ x86_emulate( asm ( "rorl %b1,%k0" : "=g" (dst.val) : "c" (imm1), "0" (src.val) ); break; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */ case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */ case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */ @@ -10661,6 +10817,8 @@ x86_emulate( host_and_vcpu_must_have(xop); goto simd_0f_imm8_ymm; +#endif /* X86EMUL_NO_SIMD */ + case X86EMUL_OPC_XOP(09, 0x01): /* XOP Grp1 */ switch ( modrm_reg & 7 ) { @@ -10720,6 +10878,8 @@ x86_emulate( } goto unrecognized_insn; +#ifndef X86EMUL_NO_SIMD + case X86EMUL_OPC_XOP(09, 0x82): /* vfrczss xmm/m128,xmm */ case X86EMUL_OPC_XOP(09, 0x83): /* vfrczsd xmm/m128,xmm */ generate_exception_if(vex.l, EXC_UD); @@ -10775,6 +10935,8 @@ x86_emulate( host_and_vcpu_must_have(xop); goto simd_0f_ymm; +#endif /* X86EMUL_NO_SIMD */ + case X86EMUL_OPC_XOP(0a, 0x10): /* bextr imm,r/m,r */ { uint8_t *buf = get_stub(stub); From patchwork Thu May 14 09:08:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 11548345 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9BDDF912 for ; Thu, 14 May 2020 09:09:44 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 74DCB206F1 for ; Thu, 14 May 2020 09:09:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 74DCB206F1 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 
4.92) (envelope-from ) id 1jZ9qi-00019F-Fk; Thu, 14 May 2020 09:08:16 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9qh-000194-6j for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:08:15 +0000 X-Inumbo-ID: 70af663a-95c2-11ea-ae69-bc764e2007e4 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 70af663a-95c2-11ea-ae69-bc764e2007e4; Thu, 14 May 2020 09:08:14 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id C325CAF8D; Thu, 14 May 2020 09:08:15 +0000 (UTC) Subject: [PATCH v9 3/9] x86emul: support MOVDIR{I,64B} insns From: Jan Beulich To: "xen-devel@lists.xenproject.org" References: Message-ID: <99a8266c-f4f2-76fd-0092-fe10f1148eb4@suse.com> Date: Thu, 14 May 2020 11:08:11 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Introduce a new blk() hook, paralleling the rmw() one in a certain way, but being intended for larger data sizes, and hence its HVM intermediate handling function doesn't fall back to splitting the operation if the requested virtual address can't be mapped. Note that SDM revision 071 doesn't specify exception behavior for ModRM.mod == 0b11; assuming #UD here. Signed-off-by: Jan Beulich Reviewed-by: Paul Durrant --- v9: Fold in "x86/HVM: make hvmemul_blk() capable of handling r/o operations". Also adjust x86_insn_is_mem_write(). v7: Add blk_NONE. Move harness'es setting of .blk. Correct indentation. Re-base. v6: Fold MOVDIRI and MOVDIR64B changes again. Use blk() for both. All tags dropped. v5: Introduce/use ->blk() hook. Correct asm() operands. v4: Split MOVDIRI and MOVDIR64B and move this one ahead. Re-base. v3: Update description. 
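For context, a minimal usage sketch of the two insns being added (illustrative only, not part of the patch), assuming the _directstoreu_u32() and _movdir64b() intrinsics available in GCC 9 / recent Clang with -mmovdiri -mmovdir64b; the helper and parameter names are made up:

    #include <immintrin.h>
    #include <stdint.h>

    /*
     * Submit a 64-byte work descriptor to a device portal, then ring a
     * doorbell.  MOVDIR64B requires a 64-byte aligned destination and
     * performs a single, non-torn 64-byte write; the source may be
     * ordinary, arbitrarily aligned memory.  MOVDIRI is a plain 4-byte
     * (or 8-byte) direct store.
     */
    static void submit_descriptor(void *portal, const void *desc,
                                  volatile uint32_t *doorbell)
    {
        _movdir64b(portal, desc);               /* 64-byte direct store */
        _directstoreu_u32((void *)doorbell, 1); /* 4-byte direct store */
    }

The test harness additions below exercise the same pair of insns, simply pointing both operands at ordinary memory.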
--- (SDE: -tnt) --- a/tools/tests/x86_emulator/test_x86_emulator.c +++ b/tools/tests/x86_emulator/test_x86_emulator.c @@ -652,6 +652,18 @@ static int cmpxchg( return X86EMUL_OKAY; } +static int blk( + enum x86_segment seg, + unsigned long offset, + void *p_data, + unsigned int bytes, + uint32_t *eflags, + struct x86_emulate_state *state, + struct x86_emulate_ctxt *ctxt) +{ + return x86_emul_blk((void *)offset, p_data, bytes, eflags, state, ctxt); +} + static int read_segment( enum x86_segment seg, struct segment_register *reg, @@ -721,6 +733,7 @@ static struct x86_emulate_ops emulops = .insn_fetch = fetch, .write = write, .cmpxchg = cmpxchg, + .blk = blk, .read_segment = read_segment, .cpuid = emul_test_cpuid, .read_cr = emul_test_read_cr, @@ -2339,6 +2352,50 @@ int main(int argc, char **argv) goto fail; printf("okay\n"); + printf("%-40s", "Testing movdiri %edx,(%ecx)..."); + if ( stack_exec && cpu_has_movdiri ) + { + instr[0] = 0x0f; instr[1] = 0x38; instr[2] = 0xf9; instr[3] = 0x11; + + regs.eip = (unsigned long)&instr[0]; + regs.ecx = (unsigned long)memset(res, -1, 16); + regs.edx = 0x44332211; + + rc = x86_emulate(&ctxt, &emulops); + if ( (rc != X86EMUL_OKAY) || + (regs.eip != (unsigned long)&instr[4]) || + res[0] != 0x44332211 || ~res[1] ) + goto fail; + printf("okay\n"); + } + else + printf("skipped\n"); + + printf("%-40s", "Testing movdir64b 144(%edx),%ecx..."); + if ( stack_exec && cpu_has_movdir64b ) + { + instr[0] = 0x66; instr[1] = 0x0f; instr[2] = 0x38; instr[3] = 0xf8; + instr[4] = 0x8a; instr[5] = 0x90; instr[8] = instr[7] = instr[6] = 0; + + regs.eip = (unsigned long)&instr[0]; + for ( i = 0; i < 64; ++i ) + res[i] = i - 20; + regs.edx = (unsigned long)res; + regs.ecx = (unsigned long)(res + 16); + + rc = x86_emulate(&ctxt, &emulops); + if ( (rc != X86EMUL_OKAY) || + (regs.eip != (unsigned long)&instr[9]) || + res[15] != -5 || res[32] != 12 ) + goto fail; + for ( i = 16; i < 32; ++i ) + if ( res[i] != i ) + goto fail; + printf("okay\n"); + } + else + printf("skipped\n"); + printf("%-40s", "Testing movq %mm3,(%ecx)..."); if ( stack_exec && cpu_has_mmx ) { --- a/tools/tests/x86_emulator/x86-emulate.h +++ b/tools/tests/x86_emulator/x86-emulate.h @@ -154,6 +154,8 @@ static inline bool xcr0_mask(uint64_t ma #define cpu_has_avx512_vnni (cp.feat.avx512_vnni && xcr0_mask(0xe6)) #define cpu_has_avx512_bitalg (cp.feat.avx512_bitalg && xcr0_mask(0xe6)) #define cpu_has_avx512_vpopcntdq (cp.feat.avx512_vpopcntdq && xcr0_mask(0xe6)) +#define cpu_has_movdiri cp.feat.movdiri +#define cpu_has_movdir64b cp.feat.movdir64b #define cpu_has_avx512_4vnniw (cp.feat.avx512_4vnniw && xcr0_mask(0xe6)) #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6)) #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6)) --- a/xen/arch/x86/arch.mk +++ b/xen/arch/x86/arch.mk @@ -47,6 +47,7 @@ $(call as-option-add,CFLAGS,CC,"rdseed % $(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB) $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM) $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID) +$(call as-option-add,CFLAGS,CC,"movdiri %rax$$(comma)(%rax)",-DHAVE_AS_MOVDIR) # GAS's idea of true is -1. 
Clang's idea is 1 $(call as-option-add,CFLAGS,CC,\ --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -1441,6 +1441,47 @@ static int hvmemul_rmw( return rc; } +static int hvmemul_blk( + enum x86_segment seg, + unsigned long offset, + void *p_data, + unsigned int bytes, + uint32_t *eflags, + struct x86_emulate_state *state, + struct x86_emulate_ctxt *ctxt) +{ + struct hvm_emulate_ctxt *hvmemul_ctxt = + container_of(ctxt, struct hvm_emulate_ctxt, ctxt); + unsigned long addr; + uint32_t pfec = PFEC_page_present; + int rc; + void *mapping = NULL; + + rc = hvmemul_virtual_to_linear( + seg, offset, bytes, NULL, hvm_access_write, hvmemul_ctxt, &addr); + if ( rc != X86EMUL_OKAY || !bytes ) + return rc; + + if ( x86_insn_is_mem_write(state, ctxt) ) + pfec |= PFEC_write_access; + + if ( is_x86_system_segment(seg) ) + pfec |= PFEC_implicit; + else if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 ) + pfec |= PFEC_user_mode; + + mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt); + if ( IS_ERR(mapping) ) + return ~PTR_ERR(mapping); + if ( !mapping ) + return X86EMUL_UNHANDLEABLE; + + rc = x86_emul_blk(mapping, p_data, bytes, eflags, state, ctxt); + hvmemul_unmap_linear_addr(mapping, addr, bytes, hvmemul_ctxt); + + return rc; +} + static int hvmemul_write_discard( enum x86_segment seg, unsigned long offset, @@ -2512,6 +2553,7 @@ static const struct x86_emulate_ops hvm_ .write = hvmemul_write, .rmw = hvmemul_rmw, .cmpxchg = hvmemul_cmpxchg, + .blk = hvmemul_blk, .validate = hvmemul_validate, .rep_ins = hvmemul_rep_ins, .rep_outs = hvmemul_rep_outs, --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -548,6 +548,8 @@ static const struct ext0f38_table { [0xf1] = { .to_mem = 1, .two_op = 1 }, [0xf2 ... 0xf3] = {}, [0xf5 ... 0xf7] = {}, + [0xf8] = { .simd_size = simd_other }, + [0xf9] = { .to_mem = 1, .two_op = 1 /* Mov */ }, }; /* Shift values between src and dst sizes of pmov{s,z}x{b,w,d}{w,d,q}. */ @@ -851,6 +853,10 @@ struct x86_emulate_state { rmw_xchg, rmw_xor, } rmw; + enum { + blk_NONE, + blk_movdir, + } blk; uint8_t modrm, modrm_mod, modrm_reg, modrm_rm; uint8_t sib_index, sib_scale; uint8_t rex_prefix; @@ -1914,6 +1920,8 @@ amd_like(const struct x86_emulate_ctxt * #define vcpu_has_avx512_bitalg() (ctxt->cpuid->feat.avx512_bitalg) #define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq) #define vcpu_has_rdpid() (ctxt->cpuid->feat.rdpid) +#define vcpu_has_movdiri() (ctxt->cpuid->feat.movdiri) +#define vcpu_has_movdir64b() (ctxt->cpuid->feat.movdir64b) #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw) #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps) #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16) @@ -2722,10 +2730,12 @@ x86_decode_0f38( { case 0x00 ... 0xef: case 0xf2 ... 0xf5: - case 0xf7 ... 0xff: + case 0xf7 ... 0xf8: + case 0xfa ... 
0xff: op_bytes = 0; /* fall through */ case 0xf6: /* adcx / adox */ + case 0xf9: /* movdiri */ ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK); break; @@ -10173,6 +10183,34 @@ x86_emulate( : "0" ((uint32_t)src.val), "rm" (_regs.edx) ); break; + case X86EMUL_OPC_66(0x0f38, 0xf8): /* movdir64b r,m512 */ + host_and_vcpu_must_have(movdir64b); + generate_exception_if(ea.type != OP_MEM, EXC_UD); + src.val = truncate_ea(*dst.reg); + generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops), + EXC_GP, 0); + fail_if(!ops->blk); + state->blk = blk_movdir; + BUILD_BUG_ON(sizeof(*mmvalp) < 64); + if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64, + ctxt)) != X86EMUL_OKAY || + (rc = ops->blk(x86_seg_es, src.val, mmvalp, 64, &_regs.eflags, + state, ctxt)) != X86EMUL_OKAY ) + goto done; + state->simd_size = simd_none; + break; + + case X86EMUL_OPC(0x0f38, 0xf9): /* movdiri mem,r */ + host_and_vcpu_must_have(movdiri); + generate_exception_if(dst.type != OP_MEM, EXC_UD); + fail_if(!ops->blk); + state->blk = blk_movdir; + if ( (rc = ops->blk(dst.mem.seg, dst.mem.off, &src.val, op_bytes, + &_regs.eflags, state, ctxt)) != X86EMUL_OKAY ) + goto done; + dst.type = OP_NONE; + break; + #ifndef X86EMUL_NO_SIMD case X86EMUL_OPC_VEX_66(0x0f3a, 0x00): /* vpermq $imm8,ymm/m256,ymm */ @@ -11432,6 +11470,77 @@ int x86_emul_rmw( return X86EMUL_OKAY; } +int x86_emul_blk( + void *ptr, + void *data, + unsigned int bytes, + uint32_t *eflags, + struct x86_emulate_state *state, + struct x86_emulate_ctxt *ctxt) +{ + switch ( state->blk ) + { + /* + * Throughout this switch(), memory clobbers are used to compensate + * that other operands may not properly express the (full) memory + * ranges covered. + */ + case blk_movdir: + switch ( bytes ) + { +#ifdef __x86_64__ + case sizeof(uint32_t): +# ifdef HAVE_AS_MOVDIR + asm ( "movdiri %0, (%1)" + :: "r" (*(uint32_t *)data), "r" (ptr) : "memory" ); +# else + /* movdiri %esi, (%rdi) */ + asm ( ".byte 0x0f, 0x38, 0xf9, 0x37" + :: "S" (*(uint32_t *)data), "D" (ptr) : "memory" ); +# endif + break; +#endif + + case sizeof(unsigned long): +#ifdef HAVE_AS_MOVDIR + asm ( "movdiri %0, (%1)" + :: "r" (*(unsigned long *)data), "r" (ptr) : "memory" ); +#else + /* movdiri %rsi, (%rdi) */ + asm ( ".byte 0x48, 0x0f, 0x38, 0xf9, 0x37" + :: "S" (*(unsigned long *)data), "D" (ptr) : "memory" ); +#endif + break; + + case 64: + if ( ((unsigned long)ptr & 0x3f) ) + { + ASSERT_UNREACHABLE(); + return X86EMUL_UNHANDLEABLE; + } +#ifdef HAVE_AS_MOVDIR + asm ( "movdir64b (%0), %1" :: "r" (data), "r" (ptr) : "memory" ); +#else + /* movdir64b (%rsi), %rdi */ + asm ( ".byte 0x66, 0x0f, 0x38, 0xf8, 0x3e" + :: "S" (data), "D" (ptr) : "memory" ); +#endif + break; + + default: + ASSERT_UNREACHABLE(); + return X86EMUL_UNHANDLEABLE; + } + break; + + default: + ASSERT_UNREACHABLE(); + return X86EMUL_UNHANDLEABLE; + } + + return X86EMUL_OKAY; +} + static void __init __maybe_unused build_assertions(void) { /* Check the values against SReg3 encoding in opcode/ModRM bytes. */ @@ -11689,6 +11798,11 @@ x86_insn_is_mem_write(const struct x86_e break; default: + switch ( ctxt->opcode ) + { + case X86EMUL_OPC_66(0x0f38, 0xf8): /* MOVDIR64B */ + return true; + } return false; } --- a/xen/arch/x86/x86_emulate/x86_emulate.h +++ b/xen/arch/x86/x86_emulate/x86_emulate.h @@ -310,6 +310,22 @@ struct x86_emulate_ops struct x86_emulate_ctxt *ctxt); /* + * blk: Emulate a large (block) memory access. + * @p_data: [IN/OUT] (optional) Pointer to source/destination buffer. 
+ * @eflags: [IN/OUT] Pointer to EFLAGS to be updated according to + * instruction effects. + * @state: [IN/OUT] Pointer to (opaque) emulator state. + */ + int (*blk)( + enum x86_segment seg, + unsigned long offset, + void *p_data, + unsigned int bytes, + uint32_t *eflags, + struct x86_emulate_state *state, + struct x86_emulate_ctxt *ctxt); + + /* * validate: Post-decode, pre-emulate hook to allow caller controlled * filtering. */ @@ -793,6 +809,14 @@ x86_emul_rmw( unsigned int bytes, uint32_t *eflags, struct x86_emulate_state *state, + struct x86_emulate_ctxt *ctxt); +int +x86_emul_blk( + void *ptr, + void *data, + unsigned int bytes, + uint32_t *eflags, + struct x86_emulate_state *state, struct x86_emulate_ctxt *ctxt); static inline void x86_emul_hw_exception( --- a/xen/include/asm-x86/cpufeature.h +++ b/xen/include/asm-x86/cpufeature.h @@ -118,6 +118,8 @@ #define cpu_has_avx512_bitalg boot_cpu_has(X86_FEATURE_AVX512_BITALG) #define cpu_has_avx512_vpopcntdq boot_cpu_has(X86_FEATURE_AVX512_VPOPCNTDQ) #define cpu_has_rdpid boot_cpu_has(X86_FEATURE_RDPID) +#define cpu_has_movdiri boot_cpu_has(X86_FEATURE_MOVDIRI) +#define cpu_has_movdir64b boot_cpu_has(X86_FEATURE_MOVDIR64B) /* CPUID level 0x80000007.edx */ #define cpu_has_itsc boot_cpu_has(X86_FEATURE_ITSC) --- a/xen/include/public/arch-x86/cpufeatureset.h +++ b/xen/include/public/arch-x86/cpufeatureset.h @@ -240,6 +240,8 @@ XEN_CPUFEATURE(AVX512_BITALG, 6*32+12) / XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14) /*A POPCNT for vectors of DW/QW */ XEN_CPUFEATURE(RDPID, 6*32+22) /*A RDPID instruction */ XEN_CPUFEATURE(CLDEMOTE, 6*32+25) /*A CLDEMOTE instruction */ +XEN_CPUFEATURE(MOVDIRI, 6*32+27) /*A MOVDIRI instruction */ +XEN_CPUFEATURE(MOVDIR64B, 6*32+28) /*A MOVDIR64B instruction */ /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */ XEN_CPUFEATURE(ITSC, 7*32+ 8) /* Invariant TSC */ From patchwork Thu May 14 09:08:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 11548347 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 66E77618 for ; Thu, 14 May 2020 09:09:47 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 488A8206F1 for ; Thu, 14 May 2020 09:09:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 488A8206F1 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9rF-0001HI-US; Thu, 14 May 2020 09:08:49 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9rE-0001H3-Ld for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:08:48 +0000 X-Inumbo-ID: 84d2137e-95c2-11ea-ae69-bc764e2007e4 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 84d2137e-95c2-11ea-ae69-bc764e2007e4; Thu, 14 May 2020 09:08:47 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown 
[195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 598D5AF69; Thu, 14 May 2020 09:08:49 +0000 (UTC) Subject: [PATCH v9 4/9] x86emul: support ENQCMD insns From: Jan Beulich To: "xen-devel@lists.xenproject.org" References: Message-ID: Date: Thu, 14 May 2020 11:08:44 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Note that the ISA extensions document revision 038 doesn't specify exception behavior for ModRM.mod == 0b11; assuming #UD here. No tests are being added to the harness - this would be quite hard, as we can't just issue the insns against RAM. Their similarity with MOVDIR64B should make the test case there good enough to cover any fundamental flaws. Signed-off-by: Jan Beulich Acked-by: Andrew Cooper --- TBD: This doesn't (can't) consult PASID translation tables yet, as we have no VMX code for this so far. I guess for this we will want to replace the direct ->read_msr(MSR_PASID, ...) with a new ->read_pasid() hook. --- v9: Consistently use named asm() operands. Also adjust x86_insn_is_mem_write(). A -> a in public header. Move asm-x86/msr-index.h addition and drop _IA32 from their names. Introduce _AC() into the emulator harness as a result. v7: Re-base. v6: Re-base. v5: New. --- a/tools/tests/x86_emulator/x86-emulate.h +++ b/tools/tests/x86_emulator/x86-emulate.h @@ -59,6 +59,9 @@ (type *)((char *)mptr__ - offsetof(type, member)); \ }) +#define AC_(n,t) (n##t) +#define _AC(n,t) AC_(n,t) + #define hweight32 __builtin_popcount #define hweight64 __builtin_popcountll --- a/xen/arch/x86/arch.mk +++ b/xen/arch/x86/arch.mk @@ -48,6 +48,7 @@ $(call as-option-add,CFLAGS,CC,"clwb (%r $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM) $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID) $(call as-option-add,CFLAGS,CC,"movdiri %rax$$(comma)(%rax)",-DHAVE_AS_MOVDIR) +$(call as-option-add,CFLAGS,CC,"enqcmd (%rax)$$(comma)%rax",-DHAVE_AS_ENQCMD) # GAS's idea of true is -1. 
Clang's idea is 1 $(call as-option-add,CFLAGS,CC,\ --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -855,6 +855,7 @@ struct x86_emulate_state { } rmw; enum { blk_NONE, + blk_enqcmd, blk_movdir, } blk; uint8_t modrm, modrm_mod, modrm_reg, modrm_rm; @@ -901,6 +902,7 @@ typedef union { uint64_t __attribute__ ((aligned(16))) xmm[2]; uint64_t __attribute__ ((aligned(32))) ymm[4]; uint64_t __attribute__ ((aligned(64))) zmm[8]; + uint32_t data32[16]; } mmval_t; /* @@ -1922,6 +1924,7 @@ amd_like(const struct x86_emulate_ctxt * #define vcpu_has_rdpid() (ctxt->cpuid->feat.rdpid) #define vcpu_has_movdiri() (ctxt->cpuid->feat.movdiri) #define vcpu_has_movdir64b() (ctxt->cpuid->feat.movdir64b) +#define vcpu_has_enqcmd() (ctxt->cpuid->feat.enqcmd) #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw) #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps) #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16) @@ -10200,6 +10203,36 @@ x86_emulate( state->simd_size = simd_none; break; + case X86EMUL_OPC_F2(0x0f38, 0xf8): /* enqcmd r,m512 */ + case X86EMUL_OPC_F3(0x0f38, 0xf8): /* enqcmds r,m512 */ + host_and_vcpu_must_have(enqcmd); + generate_exception_if(ea.type != OP_MEM, EXC_UD); + generate_exception_if(vex.pfx != vex_f2 && !mode_ring0(), EXC_GP, 0); + src.val = truncate_ea(*dst.reg); + generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops), + EXC_GP, 0); + fail_if(!ops->blk); + BUILD_BUG_ON(sizeof(*mmvalp) < 64); + if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64, + ctxt)) != X86EMUL_OKAY ) + goto done; + if ( vex.pfx == vex_f2 ) /* enqcmd */ + { + fail_if(!ops->read_msr); + if ( (rc = ops->read_msr(MSR_PASID, &msr_val, + ctxt)) != X86EMUL_OKAY ) + goto done; + generate_exception_if(!(msr_val & PASID_VALID), EXC_GP, 0); + mmvalp->data32[0] = MASK_EXTR(msr_val, PASID_PASID_MASK); + } + mmvalp->data32[0] &= ~0x7ff00000; + state->blk = blk_enqcmd; + if ( (rc = ops->blk(x86_seg_es, src.val, mmvalp, 64, &_regs.eflags, + state, ctxt)) != X86EMUL_OKAY ) + goto done; + state->simd_size = simd_none; + break; + case X86EMUL_OPC(0x0f38, 0xf9): /* movdiri mem,r */ host_and_vcpu_must_have(movdiri); generate_exception_if(dst.type != OP_MEM, EXC_UD); @@ -11480,11 +11513,36 @@ int x86_emul_blk( { switch ( state->blk ) { + bool zf; + /* * Throughout this switch(), memory clobbers are used to compensate * that other operands may not properly express the (full) memory * ranges covered. 
*/ + case blk_enqcmd: + ASSERT(bytes == 64); + if ( ((unsigned long)ptr & 0x3f) ) + { + ASSERT_UNREACHABLE(); + return X86EMUL_UNHANDLEABLE; + } + *eflags &= ~EFLAGS_MASK; +#ifdef HAVE_AS_ENQCMD + asm ( "enqcmds (%[src]), %[dst]" ASM_FLAG_OUT(, "; setz %[zf]") + : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf) + : [src] "r" (data), [dst] "r" (ptr) : "memory" ); +#else + /* enqcmds (%rsi), %rdi */ + asm ( ".byte 0xf3, 0x0f, 0x38, 0xf8, 0x3e" + ASM_FLAG_OUT(, "; setz %[zf]") + : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf) + : "S" (data), "D" (ptr) : "memory" ); +#endif + if ( zf ) + *eflags |= X86_EFLAGS_ZF; + break; + case blk_movdir: switch ( bytes ) { @@ -11801,6 +11859,8 @@ x86_insn_is_mem_write(const struct x86_e switch ( ctxt->opcode ) { case X86EMUL_OPC_66(0x0f38, 0xf8): /* MOVDIR64B */ + case X86EMUL_OPC_F2(0x0f38, 0xf8): /* ENQCMD */ + case X86EMUL_OPC_F3(0x0f38, 0xf8): /* ENQCMDS */ return true; } return false; --- a/xen/include/asm-x86/cpufeature.h +++ b/xen/include/asm-x86/cpufeature.h @@ -120,6 +120,7 @@ #define cpu_has_rdpid boot_cpu_has(X86_FEATURE_RDPID) #define cpu_has_movdiri boot_cpu_has(X86_FEATURE_MOVDIRI) #define cpu_has_movdir64b boot_cpu_has(X86_FEATURE_MOVDIR64B) +#define cpu_has_enqcmd boot_cpu_has(X86_FEATURE_ENQCMD) /* CPUID level 0x80000007.edx */ #define cpu_has_itsc boot_cpu_has(X86_FEATURE_ITSC) --- a/xen/include/asm-x86/msr-index.h +++ b/xen/include/asm-x86/msr-index.h @@ -74,6 +74,10 @@ #define MSR_PL3_SSP 0x000006a7 #define MSR_INTERRUPT_SSP_TABLE 0x000006a8 +#define MSR_PASID 0x00000d93 +#define PASID_PASID_MASK 0x000fffff +#define PASID_VALID (_AC(1, ULL) << 31) + /* * Legacy MSR constants in need of cleanup. No new MSRs below this comment. */ --- a/xen/include/public/arch-x86/cpufeatureset.h +++ b/xen/include/public/arch-x86/cpufeatureset.h @@ -242,6 +242,7 @@ XEN_CPUFEATURE(RDPID, 6*32+22) / XEN_CPUFEATURE(CLDEMOTE, 6*32+25) /*A CLDEMOTE instruction */ XEN_CPUFEATURE(MOVDIRI, 6*32+27) /*A MOVDIRI instruction */ XEN_CPUFEATURE(MOVDIR64B, 6*32+28) /*A MOVDIR64B instruction */ +XEN_CPUFEATURE(ENQCMD, 6*32+29) /* ENQCMD{,S} instructions */ /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */ XEN_CPUFEATURE(ITSC, 7*32+ 8) /* Invariant TSC */ From patchwork Thu May 14 09:09:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 11548349 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2C897618 for ; Thu, 14 May 2020 09:10:06 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 12E0C206F1 for ; Thu, 14 May 2020 09:10:06 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 12E0C206F1 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9rj-0001Mx-7d; Thu, 14 May 2020 09:09:19 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9rh-0001Ma-BX for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:09:17 +0000 
X-Inumbo-ID: 96095896-95c2-11ea-b07b-bc764e2007e4 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 96095896-95c2-11ea-b07b-bc764e2007e4; Thu, 14 May 2020 09:09:16 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 736FEAF4D; Thu, 14 May 2020 09:09:18 +0000 (UTC) Subject: [PATCH v9 5/9] x86emul: support SERIALIZE From: Jan Beulich To: "xen-devel@lists.xenproject.org" References: Message-ID: <20f4c371-b60a-b86c-b0c0-b1ab64ca4349@suse.com> Date: Thu, 14 May 2020 11:09:13 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" ... enabling its use by all guest kinds at the same time. Signed-off-by: Jan Beulich Acked-by: Andrew Cooper --- v9: A -> a in public header. v7: Re-base. v6: New. --- a/tools/libxl/libxl_cpuid.c +++ b/tools/libxl/libxl_cpuid.c @@ -214,6 +214,7 @@ int libxl_cpuid_parse_config(libxl_cpuid {"avx512-4vnniw",0x00000007, 0, CPUID_REG_EDX, 2, 1}, {"avx512-4fmaps",0x00000007, 0, CPUID_REG_EDX, 3, 1}, {"md-clear", 0x00000007, 0, CPUID_REG_EDX, 10, 1}, + {"serialize", 0x00000007, 0, CPUID_REG_EDX, 14, 1}, {"cet-ibt", 0x00000007, 0, CPUID_REG_EDX, 20, 1}, {"ibrsb", 0x00000007, 0, CPUID_REG_EDX, 26, 1}, {"stibp", 0x00000007, 0, CPUID_REG_EDX, 27, 1}, --- a/tools/misc/xen-cpuid.c +++ b/tools/misc/xen-cpuid.c @@ -161,6 +161,7 @@ static const char *const str_7d0[32] = [10] = "md-clear", /* 12 */ [13] = "tsx-force-abort", + [14] = "serialize", [18] = "pconfig", [20] = "cet-ibt", --- a/tools/tests/x86_emulator/x86-emulate.h +++ b/tools/tests/x86_emulator/x86-emulate.h @@ -161,6 +161,7 @@ static inline bool xcr0_mask(uint64_t ma #define cpu_has_movdir64b cp.feat.movdir64b #define cpu_has_avx512_4vnniw (cp.feat.avx512_4vnniw && xcr0_mask(0xe6)) #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6)) +#define cpu_has_serialize cp.feat.serialize #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6)) #define cpu_has_xgetbv1 (cpu_has_xsave && cp.xstate.xgetbv1) --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -1927,6 +1927,7 @@ amd_like(const struct x86_emulate_ctxt * #define vcpu_has_enqcmd() (ctxt->cpuid->feat.enqcmd) #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw) #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps) +#define vcpu_has_serialize() (ctxt->cpuid->feat.serialize) #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16) #define vcpu_must_have(feat) \ @@ -5660,6 +5661,18 @@ x86_emulate( goto done; break; + case 0xe8: + switch ( vex.pfx ) + { + case vex_none: /* serialize */ + host_and_vcpu_must_have(serialize); + asm volatile ( ".byte 0x0f, 0x01, 0xe8" ); + break; + default: + goto unimplemented_insn; + } + break; + case 0xf8: /* swapgs */ generate_exception_if(!mode_64bit(), EXC_UD); generate_exception_if(!mode_ring0(), EXC_GP, 0); --- a/xen/include/asm-x86/cpufeature.h +++ b/xen/include/asm-x86/cpufeature.h @@ -129,6 +129,7 @@ #define cpu_has_avx512_4vnniw boot_cpu_has(X86_FEATURE_AVX512_4VNNIW) #define 
cpu_has_avx512_4fmaps boot_cpu_has(X86_FEATURE_AVX512_4FMAPS) #define cpu_has_tsx_force_abort boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT) +#define cpu_has_serialize boot_cpu_has(X86_FEATURE_SERIALIZE) /* CPUID level 0x00000007:1.eax */ #define cpu_has_avx512_bf16 boot_cpu_has(X86_FEATURE_AVX512_BF16) --- a/xen/include/public/arch-x86/cpufeatureset.h +++ b/xen/include/public/arch-x86/cpufeatureset.h @@ -260,6 +260,7 @@ XEN_CPUFEATURE(AVX512_4VNNIW, 9*32+ 2) / XEN_CPUFEATURE(AVX512_4FMAPS, 9*32+ 3) /*A AVX512 Multiply Accumulation Single Precision */ XEN_CPUFEATURE(MD_CLEAR, 9*32+10) /*A VERW clears microarchitectural buffers */ XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13) /* MSR_TSX_FORCE_ABORT.RTM_ABORT */ +XEN_CPUFEATURE(SERIALIZE, 9*32+14) /*a SERIALIZE insn */ XEN_CPUFEATURE(CET_IBT, 9*32+20) /* CET - Indirect Branch Tracking */ XEN_CPUFEATURE(IBRSB, 9*32+26) /*A IBRS and IBPB support (used by Intel) */ XEN_CPUFEATURE(STIBP, 9*32+27) /*A STIBP */ From patchwork Thu May 14 09:09:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 11548353 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 71A23618 for ; Thu, 14 May 2020 09:10:33 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5838220675 for ; Thu, 14 May 2020 09:10:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5838220675 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9s7-0001RW-Hc; Thu, 14 May 2020 09:09:43 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9s6-0001RI-W7 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:09:43 +0000 X-Inumbo-ID: a553c413-95c2-11ea-a463-12813bfff9fa Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id a553c413-95c2-11ea-a463-12813bfff9fa; Thu, 14 May 2020 09:09:42 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 2332DAF68; Thu, 14 May 2020 09:09:44 +0000 (UTC) Subject: [PATCH v9 6/9] x86emul: support X{SUS,RES}LDTRK From: Jan Beulich To: "xen-devel@lists.xenproject.org" References: Message-ID: <564bb51e-aeaf-48de-6326-a7c03e1fe738@suse.com> Date: Thu, 14 May 2020 11:09:39 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" There's nothing to be done by the emulator, as we unconditionally abort any XBEGIN. 
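For reference, a minimal standalone sketch (not part of this patch; the helper names are made up) of how the two insns can be emitted as raw bytes on assemblers lacking the mnemonics; the encodings correspond to the vex_f2 cases of the 0f 01 group handled below:

static inline void xsusldtrk(void)
{
    /* XSUSLDTRK == F2 0F 01 E8: suspend TSX load address tracking. */
    asm volatile ( ".byte 0xf2, 0x0f, 0x01, 0xe8" ::: "memory" );
}

static inline void xresldtrk(void)
{
    /* XRESLDTRK == F2 0F 01 E9: resume TSX load address tracking. */
    asm volatile ( ".byte 0xf2, 0x0f, 0x01, 0xe9" ::: "memory" );
}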
Signed-off-by: Jan Beulich Acked-by: Andrew Cooper --- v9: A -> a in public header. Add comments. v6: New. --- a/tools/libxl/libxl_cpuid.c +++ b/tools/libxl/libxl_cpuid.c @@ -208,6 +208,7 @@ int libxl_cpuid_parse_config(libxl_cpuid {"avx512-vnni", 0x00000007, 0, CPUID_REG_ECX, 11, 1}, {"avx512-bitalg",0x00000007, 0, CPUID_REG_ECX, 12, 1}, {"avx512-vpopcntdq",0x00000007,0,CPUID_REG_ECX, 14, 1}, + {"tsxldtrk", 0x00000007, 0, CPUID_REG_ECX, 16, 1}, {"rdpid", 0x00000007, 0, CPUID_REG_ECX, 22, 1}, {"cldemote", 0x00000007, 0, CPUID_REG_ECX, 25, 1}, --- a/tools/misc/xen-cpuid.c +++ b/tools/misc/xen-cpuid.c @@ -128,6 +128,7 @@ static const char *const str_7c0[32] = [10] = "vpclmulqdq", [11] = "avx512_vnni", [12] = "avx512_bitalg", [14] = "avx512_vpopcntdq", + [16] = "tsxldtrk", [22] = "rdpid", /* 24 */ [25] = "cldemote", --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -1921,6 +1921,7 @@ amd_like(const struct x86_emulate_ctxt * #define vcpu_has_avx512_vnni() (ctxt->cpuid->feat.avx512_vnni) #define vcpu_has_avx512_bitalg() (ctxt->cpuid->feat.avx512_bitalg) #define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq) +#define vcpu_has_tsxldtrk() (ctxt->cpuid->feat.tsxldtrk) #define vcpu_has_rdpid() (ctxt->cpuid->feat.rdpid) #define vcpu_has_movdiri() (ctxt->cpuid->feat.movdiri) #define vcpu_has_movdir64b() (ctxt->cpuid->feat.movdir64b) @@ -5668,6 +5669,28 @@ x86_emulate( host_and_vcpu_must_have(serialize); asm volatile ( ".byte 0x0f, 0x01, 0xe8" ); break; + case vex_f2: /* xsusldtrk */ + vcpu_must_have(tsxldtrk); + /* + * We're never in a transactional region when coming here + * - nothing else to do. + */ + break; + default: + goto unimplemented_insn; + } + break; + + case 0xe9: + switch ( vex.pfx ) + { + case vex_f2: /* xresldtrk */ + vcpu_must_have(tsxldtrk); + /* + * We're never in a transactional region when coming here + * - nothing else to do. + */ + break; default: goto unimplemented_insn; } --- a/xen/include/public/arch-x86/cpufeatureset.h +++ b/xen/include/public/arch-x86/cpufeatureset.h @@ -238,6 +238,7 @@ XEN_CPUFEATURE(VPCLMULQDQ, 6*32+10) / XEN_CPUFEATURE(AVX512_VNNI, 6*32+11) /*A Vector Neural Network Instrs */ XEN_CPUFEATURE(AVX512_BITALG, 6*32+12) /*A Support for VPOPCNT[B,W] and VPSHUFBITQMB */ XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14) /*A POPCNT for vectors of DW/QW */ +XEN_CPUFEATURE(TSXLDTRK, 6*32+16) /*a TSX load tracking suspend/resume insns */ XEN_CPUFEATURE(RDPID, 6*32+22) /*A RDPID instruction */ XEN_CPUFEATURE(CLDEMOTE, 6*32+25) /*A CLDEMOTE instruction */ XEN_CPUFEATURE(MOVDIRI, 6*32+27) /*A MOVDIRI instruction */ --- a/xen/tools/gen-cpuid.py +++ b/xen/tools/gen-cpuid.py @@ -289,6 +289,9 @@ def crunch_numbers(state): # as dependent features simplifies Xen's logic, and prevents the guest # from seeing implausible configurations. IBRSB: [STIBP, SSBD], + + # In principle the TSXLDTRK insns could also be considered independent. 
+ RTM: [TSXLDTRK], } deep_features = tuple(sorted(deps.keys())) From patchwork Thu May 14 09:10:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 11548355 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 75AAB618 for ; Thu, 14 May 2020 09:11:12 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 50AF220675 for ; Thu, 14 May 2020 09:11:12 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 50AF220675 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9sk-0002EF-Qo; Thu, 14 May 2020 09:10:22 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9sj-0002Dz-Rv for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:10:21 +0000 X-Inumbo-ID: bc465a22-95c2-11ea-b07b-bc764e2007e4 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id bc465a22-95c2-11ea-b07b-bc764e2007e4; Thu, 14 May 2020 09:10:20 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 81AACAF48; Thu, 14 May 2020 09:10:22 +0000 (UTC) Subject: [PATCH v9 7/9] x86emul: support FNSTENV and FNSAVE From: Jan Beulich To: "xen-devel@lists.xenproject.org" References: Message-ID: <29dd611f-cd5d-464b-2046-9c73160d4cea@suse.com> Date: Thu, 14 May 2020 11:10:17 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" To avoid introducing another boolean into emulator state, the rex_prefix field gets (ab)used to convey the real/VM86 vs protected mode info (affecting structure layout, albeit not size) to x86_emul_blk(). Signed-off-by: Jan Beulich --- TBD: The full 16-bit padding fields in the 32-bit structures get filled with all ones by modern CPUs (i.e. unlike what the comment says for FIP and FDP). We may want to mirror this as well (for the real mode variant), even if those fields' contents are unspecified. --- v9: Fix !HVM build. Add /*state->*/ comments. Add memset(). Add/extend comments. v7: New. 
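To illustrate the conversion the new ->blk() handler performs (a minimal sketch with assumed helper names, not code from this patch): real/VM86-format environment images carry no selectors, so the protected-mode fcs:fip and fds:fdp pairs get folded into linear addresses and then split across the fip_lo/fip_hi (resp. fdp_lo/fdp_hi) fields:

#include <stdint.h>

/* Fold a protected-mode selector:offset pair into the linear address
 * which real/VM86-format FNSTENV / FNSAVE images store. */
static uint32_t fold_linear(uint16_t sel, uint16_t off)
{
    return ((uint32_t)sel << 4) + off;
}

/* Split such a linear address the way the 32-bit real-mode layout
 * stores it: the low 16 bits in *lo, the remaining bits in *hi. */
static void split_real32(uint32_t lin, uint16_t *lo, uint16_t *hi)
{
    *lo = lin & 0xffff;
    *hi = lin >> 16;
}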
--- a/tools/tests/x86_emulator/x86-emulate.h +++ b/tools/tests/x86_emulator/x86-emulate.h @@ -123,6 +123,7 @@ static inline bool xcr0_mask(uint64_t ma } #define cache_line_size() (cp.basic.clflush_size * 8) +#define cpu_has_fpu cp.basic.fpu #define cpu_has_mmx cp.basic.mmx #define cpu_has_fxsr cp.basic.fxsr #define cpu_has_sse cp.basic.sse --- a/tools/tests/x86_emulator/test_x86_emulator.c +++ b/tools/tests/x86_emulator/test_x86_emulator.c @@ -748,6 +748,25 @@ static struct x86_emulate_ops emulops = #define MMAP_ADDR 0x100000 +/* + * 64-bit OSes may not (be able to) properly restore the two selectors in + * the FPU environment. Zap them so that memcmp() on two saved images will + * work regardless of whether a context switch occurred in the middle. + */ +static void zap_fpsel(unsigned int *env, bool is_32bit) +{ + if ( is_32bit ) + { + env[4] &= ~0xffff; + env[6] &= ~0xffff; + } + else + { + env[2] &= ~0xffff; + env[3] &= ~0xffff; + } +} + #ifdef __x86_64__ # define STKVAL_DISP 64 static const struct { @@ -2394,6 +2413,62 @@ int main(int argc, char **argv) printf("okay\n"); } else + printf("skipped\n"); + + printf("%-40s", "Testing fnstenv 4(%ecx)..."); + if ( stack_exec && cpu_has_fpu ) + { + const uint16_t three = 3; + + asm volatile ( "fninit\n\t" + "fld1\n\t" + "fidivs %1\n\t" + "fstenv %0" + : "=m" (res[9]) : "m" (three) : "memory" ); + zap_fpsel(&res[9], true); + instr[0] = 0xd9; instr[1] = 0x71; instr[2] = 0x04; + regs.eip = (unsigned long)&instr[0]; + regs.ecx = (unsigned long)res; + res[8] = 0xaa55aa55; + rc = x86_emulate(&ctxt, &emulops); + zap_fpsel(&res[1], true); + if ( (rc != X86EMUL_OKAY) || + memcmp(res + 1, res + 9, 28) || + res[8] != 0xaa55aa55 || + (regs.eip != (unsigned long)&instr[3]) ) + goto fail; + printf("okay\n"); + } + else + printf("skipped\n"); + + printf("%-40s", "Testing 16-bit fnsave (%ecx)..."); + if ( stack_exec && cpu_has_fpu ) + { + const uint16_t five = 5; + + asm volatile ( "fninit\n\t" + "fld1\n\t" + "fidivs %1\n\t" + "fsaves %0" + : "=m" (res[25]) : "m" (five) : "memory" ); + zap_fpsel(&res[25], false); + asm volatile ( "frstors %0" :: "m" (res[25]) : "memory" ); + instr[0] = 0x66; instr[1] = 0xdd; instr[2] = 0x31; + regs.eip = (unsigned long)&instr[0]; + regs.ecx = (unsigned long)res; + res[23] = 0xaa55aa55; + res[24] = 0xaa55aa55; + rc = x86_emulate(&ctxt, &emulops); + if ( (rc != X86EMUL_OKAY) || + memcmp(res, res + 25, 94) || + (res[23] >> 16) != 0xaa55 || + res[24] != 0xaa55aa55 || + (regs.eip != (unsigned long)&instr[3]) ) + goto fail; + printf("okay\n"); + } + else printf("skipped\n"); printf("%-40s", "Testing movq %mm3,(%ecx)..."); --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -856,6 +856,9 @@ struct x86_emulate_state { enum { blk_NONE, blk_enqcmd, +#ifndef X86EMUL_NO_FPU + blk_fst, /* FNSTENV, FNSAVE */ +#endif blk_movdir, } blk; uint8_t modrm, modrm_mod, modrm_reg, modrm_rm; @@ -897,6 +900,50 @@ struct x86_emulate_state { #define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. 
*/ #endif +#ifndef X86EMUL_NO_FPU +struct x87_env16 { + uint16_t fcw; + uint16_t fsw; + uint16_t ftw; + union { + struct { + uint16_t fip_lo; + uint16_t fop:11, :1, fip_hi:4; + uint16_t fdp_lo; + uint16_t :12, fdp_hi:4; + } real; + struct { + uint16_t fip; + uint16_t fcs; + uint16_t fdp; + uint16_t fds; + } prot; + } mode; +}; + +struct x87_env32 { + uint32_t fcw:16, :16; + uint32_t fsw:16, :16; + uint32_t ftw:16, :16; + union { + struct { + /* some CPUs/FPUs also store the full FIP here */ + uint32_t fip_lo:16, :16; + uint32_t fop:11, :1, fip_hi:16, :4; + /* some CPUs/FPUs also store the full FDP here */ + uint32_t fdp_lo:16, :16; + uint32_t :12, fdp_hi:16, :4; + } real; + struct { + uint32_t fip; + uint32_t fcs:16, fop:11, :5; + uint32_t fdp; + uint32_t fds:16, :16; + } prot; + } mode; +}; +#endif + typedef union { uint64_t mmx; uint64_t __attribute__ ((aligned(16))) xmm[2]; @@ -4912,9 +4959,22 @@ x86_emulate( goto done; emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val); break; - case 6: /* fnstenv - TODO */ + case 6: /* fnstenv */ + fail_if(!ops->blk); + state->blk = blk_fst; + /* + * REX is meaningless for this insn by this point - (ab)use + * the field to communicate real vs protected mode to ->blk(). + */ + /*state->*/rex_prefix = in_protmode(ctxt, ops); + if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL, + op_bytes > 2 ? sizeof(struct x87_env32) + : sizeof(struct x87_env16), + &_regs.eflags, + state, ctxt)) != X86EMUL_OKAY ) + goto done; state->fpu_ctrl = true; - goto unimplemented_insn; + break; case 7: /* fnstcw m2byte */ state->fpu_ctrl = true; fpu_memdst16: @@ -5068,9 +5128,24 @@ x86_emulate( emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val); break; case 4: /* frstor - TODO */ - case 6: /* fnsave - TODO */ state->fpu_ctrl = true; goto unimplemented_insn; + case 6: /* fnsave */ + fail_if(!ops->blk); + state->blk = blk_fst; + /* + * REX is meaningless for this insn by this point - (ab)use + * the field to communicate real vs protected mode to ->blk(). + */ + /*state->*/rex_prefix = in_protmode(ctxt, ops); + if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL, + op_bytes > 2 ? sizeof(struct x87_env32) + 80 + : sizeof(struct x87_env16) + 80, + &_regs.eflags, + state, ctxt)) != X86EMUL_OKAY ) + goto done; + state->fpu_ctrl = true; + break; case 7: /* fnstsw m2byte */ state->fpu_ctrl = true; goto fpu_memdst16; @@ -11550,6 +11625,14 @@ int x86_emul_blk( switch ( state->blk ) { bool zf; +#ifndef X86EMUL_NO_FPU + struct { + struct x87_env32 env; + struct { + uint8_t bytes[10]; + } freg[8]; + } fpstate; +#endif /* * Throughout this switch(), memory clobbers are used to compensate @@ -11579,6 +11662,98 @@ int x86_emul_blk( *eflags |= X86_EFLAGS_ZF; break; +#ifndef X86EMUL_NO_FPU + + case blk_fst: + ASSERT(!data); + + /* Don't chance consuming uninitialized data. */ + memset(&fpstate, 0, sizeof(fpstate)); + if ( bytes > sizeof(fpstate.env) ) + asm ( "fnsave %0" : "+m" (fpstate) ); + else + asm ( "fnstenv %0" : "+m" (fpstate.env) ); + + /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */ + switch ( bytes ) + { + case sizeof(fpstate.env): /* 32-bit FNSTENV */ + case sizeof(fpstate): /* 32-bit FNSAVE */ + if ( !state->rex_prefix ) + { + /* Convert 32-bit prot to 32-bit real/vm86 format. 
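+ * (Each fcs:fip / fds:fdp selector:offset pair gets folded into a linear address; its low 16 bits go into fip_lo / fdp_lo, the remaining bits into fip_hi / fdp_hi.)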
*/ + unsigned int fip = fpstate.env.mode.prot.fip + + (fpstate.env.mode.prot.fcs << 4); + unsigned int fdp = fpstate.env.mode.prot.fdp + + (fpstate.env.mode.prot.fds << 4); + unsigned int fop = fpstate.env.mode.prot.fop; + + memset(&fpstate.env.mode, 0, sizeof(fpstate.env.mode)); + fpstate.env.mode.real.fip_lo = fip; + fpstate.env.mode.real.fip_hi = fip >> 16; + fpstate.env.mode.real.fop = fop; + fpstate.env.mode.real.fdp_lo = fdp; + fpstate.env.mode.real.fdp_hi = fdp >> 16; + } + memcpy(ptr, &fpstate.env, sizeof(fpstate.env)); + if ( bytes == sizeof(fpstate.env) ) + ptr = NULL; + else + ptr += sizeof(fpstate.env); + break; + + case sizeof(struct x87_env16): /* 16-bit FNSTENV */ + case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FNSAVE */ + if ( state->rex_prefix ) + { + /* Convert 32-bit prot to 16-bit prot format. */ + struct x87_env16 *env = ptr; + + env->fcw = fpstate.env.fcw; + env->fsw = fpstate.env.fsw; + env->ftw = fpstate.env.ftw; + env->mode.prot.fip = fpstate.env.mode.prot.fip; + env->mode.prot.fcs = fpstate.env.mode.prot.fcs; + env->mode.prot.fdp = fpstate.env.mode.prot.fdp; + env->mode.prot.fds = fpstate.env.mode.prot.fds; + } + else + { + /* Convert 32-bit prot to 16-bit real/vm86 format. */ + unsigned int fip = fpstate.env.mode.prot.fip + + (fpstate.env.mode.prot.fcs << 4); + unsigned int fdp = fpstate.env.mode.prot.fdp + + (fpstate.env.mode.prot.fds << 4); + struct x87_env16 env = { + .fcw = fpstate.env.fcw, + .fsw = fpstate.env.fsw, + .ftw = fpstate.env.ftw, + .mode.real.fip_lo = fip, + .mode.real.fip_hi = fip >> 16, + .mode.real.fop = fpstate.env.mode.prot.fop, + .mode.real.fdp_lo = fdp, + .mode.real.fdp_hi = fdp >> 16 + }; + + memcpy(ptr, &env, sizeof(env)); + } + if ( bytes == sizeof(struct x87_env16) ) + ptr = NULL; + else + ptr += sizeof(struct x87_env16); + break; + + default: + ASSERT_UNREACHABLE(); + return X86EMUL_UNHANDLEABLE; + } + + if ( ptr ) + memcpy(ptr, fpstate.freg, sizeof(fpstate.freg)); + break; + +#endif /* X86EMUL_NO_FPU */ + case blk_movdir: switch ( bytes ) { From patchwork Thu May 14 09:11:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 11548359 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6BEBB618 for ; Thu, 14 May 2020 09:12:02 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5197F20675 for ; Thu, 14 May 2020 09:12:02 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5197F20675 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9ta-0002Mj-8O; Thu, 14 May 2020 09:11:14 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9tY-0002MX-KU for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:11:12 +0000 X-Inumbo-ID: da377976-95c2-11ea-a463-12813bfff9fa Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-amaz-eas2.inumbo.com 
(Halon) with ESMTPS id da377976-95c2-11ea-a463-12813bfff9fa; Thu, 14 May 2020 09:11:11 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id D3800AA4F; Thu, 14 May 2020 09:11:12 +0000 (UTC) Subject: [PATCH v9 8/9] x86emul: support FLDENV and FRSTOR From: Jan Beulich To: "xen-devel@lists.xenproject.org" References: Message-ID: Date: Thu, 14 May 2020 11:11:08 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" While the Intel SDM claims that FRSTOR itself may raise #MF upon completion, Intel has confirmed this to be a documentation error which will be corrected in due course; the behavior matches FLDENV's, as old hard-copy manuals describe it. Re-arrange a switch() statement's case label order to allow for fall-through from FLDENV handling to FNSTENV's. Signed-off-by: Jan Beulich --- v9: Refine description. Re-base over changes to earlier patch. Add comments. v7: New. --- a/tools/tests/x86_emulator/test_x86_emulator.c +++ b/tools/tests/x86_emulator/test_x86_emulator.c @@ -2442,6 +2442,27 @@ int main(int argc, char **argv) else printf("skipped\n"); + printf("%-40s", "Testing fldenv 8(%edx)..."); + if ( stack_exec && cpu_has_fpu ) + { + asm volatile ( "fnstenv %0\n\t" + "fninit" + : "=m" (res[2]) :: "memory" ); + zap_fpsel(&res[2], true); + instr[0] = 0xd9; instr[1] = 0x62; instr[2] = 0x08; + regs.eip = (unsigned long)&instr[0]; + regs.edx = (unsigned long)res; + rc = x86_emulate(&ctxt, &emulops); + asm volatile ( "fnstenv %0" : "=m" (res[9]) :: "memory" ); + if ( (rc != X86EMUL_OKAY) || + memcmp(res + 2, res + 9, 28) || + (regs.eip != (unsigned long)&instr[3]) ) + goto fail; + printf("okay\n"); + } + else + printf("skipped\n"); + printf("%-40s", "Testing 16-bit fnsave (%ecx)..."); if ( stack_exec && cpu_has_fpu ) { @@ -2468,6 +2489,31 @@ int main(int argc, char **argv) goto fail; printf("okay\n"); } + else + printf("skipped\n"); + + printf("%-40s", "Testing frstor (%edx)..."); + if ( stack_exec && cpu_has_fpu ) + { + const uint16_t seven = 7; + + asm volatile ( "fninit\n\t" + "fld1\n\t" + "fidivs %1\n\t" + "fnsave %0\n\t" + : "=&m" (res[0]) : "m" (seven) : "memory" ); + zap_fpsel(&res[0], true); + instr[0] = 0xdd; instr[1] = 0x22; + regs.eip = (unsigned long)&instr[0]; + regs.edx = (unsigned long)res; + rc = x86_emulate(&ctxt, &emulops); + asm volatile ( "fnsave %0" : "=m" (res[27]) :: "memory" ); + if ( (rc != X86EMUL_OKAY) || + memcmp(res, res + 27, 108) || + (regs.eip != (unsigned long)&instr[2]) ) + goto fail; + printf("okay\n"); + } else printf("skipped\n"); --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -857,6 +857,7 @@ struct x86_emulate_state { blk_NONE, blk_enqcmd, #ifndef X86EMUL_NO_FPU + blk_fld, /* FLDENV, FRSTOR */ blk_fst, /* FNSTENV, FNSAVE */ #endif blk_movdir, @@ -4948,22 +4949,15 @@ x86_emulate( dst.bytes = 4; emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val); break; - case 4: /* fldenv - TODO */ - state->fpu_ctrl = true; - goto unimplemented_insn; - case 5: /* fldcw m2byte */ - state->fpu_ctrl = true; - fpu_memsrc16: - if ( (rc =
ops->read(ea.mem.seg, ea.mem.off, &src.val, - 2, ctxt)) != X86EMUL_OKAY ) - goto done; - emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val); - break; + case 4: /* fldenv */ + /* Raise #MF now if there are pending unmasked exceptions. */ + emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */); + /* fall through */ case 6: /* fnstenv */ fail_if(!ops->blk); - state->blk = blk_fst; + state->blk = modrm_reg & 2 ? blk_fst : blk_fld; /* - * REX is meaningless for this insn by this point - (ab)use + * REX is meaningless for these insns by this point - (ab)use * the field to communicate real vs protected mode to ->blk(). */ /*state->*/rex_prefix = in_protmode(ctxt, ops); @@ -4975,6 +4969,14 @@ x86_emulate( goto done; state->fpu_ctrl = true; break; + case 5: /* fldcw m2byte */ + state->fpu_ctrl = true; + fpu_memsrc16: + if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val, + 2, ctxt)) != X86EMUL_OKAY ) + goto done; + emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val); + break; case 7: /* fnstcw m2byte */ state->fpu_ctrl = true; fpu_memdst16: @@ -5127,14 +5129,15 @@ x86_emulate( dst.bytes = 8; emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val); break; - case 4: /* frstor - TODO */ - state->fpu_ctrl = true; - goto unimplemented_insn; + case 4: /* frstor */ + /* Raise #MF now if there are pending unmasked exceptions. */ + emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */); + /* fall through */ case 6: /* fnsave */ fail_if(!ops->blk); - state->blk = blk_fst; + state->blk = modrm_reg & 2 ? blk_fst : blk_fld; /* - * REX is meaningless for this insn by this point - (ab)use + * REX is meaningless for these insns by this point - (ab)use * the field to communicate real vs protected mode to ->blk(). */ /*state->*/rex_prefix = in_protmode(ctxt, ops); @@ -11664,6 +11667,92 @@ int x86_emul_blk( #ifndef X86EMUL_NO_FPU + case blk_fld: + ASSERT(!data); + + /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */ + switch ( bytes ) + { + case sizeof(fpstate.env): /* 32-bit FLDENV */ + case sizeof(fpstate): /* 32-bit FRSTOR */ + memcpy(&fpstate.env, ptr, sizeof(fpstate.env)); + if ( !state->rex_prefix ) + { + /* Convert 32-bit real/vm86 to 32-bit prot format. */ + unsigned int fip = fpstate.env.mode.real.fip_lo + + (fpstate.env.mode.real.fip_hi << 16); + unsigned int fdp = fpstate.env.mode.real.fdp_lo + + (fpstate.env.mode.real.fdp_hi << 16); + unsigned int fop = fpstate.env.mode.real.fop; + + fpstate.env.mode.prot.fip = fip & 0xf; + fpstate.env.mode.prot.fcs = fip >> 4; + fpstate.env.mode.prot.fop = fop; + fpstate.env.mode.prot.fdp = fdp & 0xf; + fpstate.env.mode.prot.fds = fdp >> 4; + } + + if ( bytes == sizeof(fpstate.env) ) + ptr = NULL; + else + ptr += sizeof(fpstate.env); + break; + + case sizeof(struct x87_env16): /* 16-bit FLDENV */ + case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FRSTOR */ + { + const struct x87_env16 *env = ptr; + + fpstate.env.fcw = env->fcw; + fpstate.env.fsw = env->fsw; + fpstate.env.ftw = env->ftw; + + if ( state->rex_prefix ) + { + /* Convert 16-bit prot to 32-bit prot format. */ + fpstate.env.mode.prot.fip = env->mode.prot.fip; + fpstate.env.mode.prot.fcs = env->mode.prot.fcs; + fpstate.env.mode.prot.fdp = env->mode.prot.fdp; + fpstate.env.mode.prot.fds = env->mode.prot.fds; + fpstate.env.mode.prot.fop = 0; /* unknown */ + } + else + { + /* Convert 16-bit real/vm86 to 32-bit prot format. 
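+ * (Reassemble the linear address from fip_lo / fip_hi, then derive a 16-byte granular fcs as addr >> 4 and a 4-bit fip offset as addr & 0xf; likewise for FDP.)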
*/ + unsigned int fip = env->mode.real.fip_lo + + (env->mode.real.fip_hi << 16); + unsigned int fdp = env->mode.real.fdp_lo + + (env->mode.real.fdp_hi << 16); + unsigned int fop = env->mode.real.fop; + + fpstate.env.mode.prot.fip = fip & 0xf; + fpstate.env.mode.prot.fcs = fip >> 4; + fpstate.env.mode.prot.fop = fop; + fpstate.env.mode.prot.fdp = fdp & 0xf; + fpstate.env.mode.prot.fds = fdp >> 4; + } + + if ( bytes == sizeof(*env) ) + ptr = NULL; + else + ptr += sizeof(*env); + break; + } + + default: + ASSERT_UNREACHABLE(); + return X86EMUL_UNHANDLEABLE; + } + + if ( ptr ) + { + memcpy(fpstate.freg, ptr, sizeof(fpstate.freg)); + asm volatile ( "frstor %0" :: "m" (fpstate) ); + } + else + asm volatile ( "fldenv %0" :: "m" (fpstate.env) ); + break; + case blk_fst: ASSERT(!data); From patchwork Thu May 14 09:11:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 11548361 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9ACA1912 for ; Thu, 14 May 2020 09:12:15 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 6CD8B20675 for ; Thu, 14 May 2020 09:12:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6CD8B20675 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9ty-0002Qp-Gy; Thu, 14 May 2020 09:11:38 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1jZ9tw-0002Qa-P8 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:11:36 +0000 X-Inumbo-ID: e8e47d20-95c2-11ea-a463-12813bfff9fa Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id e8e47d20-95c2-11ea-a463-12813bfff9fa; Thu, 14 May 2020 09:11:35 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 4DC5CAF33; Thu, 14 May 2020 09:11:37 +0000 (UTC) Subject: [PATCH v9 9/9] x86emul: support FXSAVE/FXRSTOR From: Jan Beulich To: "xen-devel@lists.xenproject.org" References: Message-ID: <6747df64-dce8-8a24-0986-81ddfe44b285@suse.com> Date: Thu, 14 May 2020 11:11:32 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Note that FPU selector handling as well as MXCSR mask saving for now does not honor differences between host and guest visible featuresets. 
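As background, a minimal sketch (the standard SDM-documented technique, not code from this patch) of how the architectural MXCSR mask gets determined: FXSAVE into a zeroed area, then treat an all-zero MXCSR_MASK field as standing for the 0xffbf default.

#include <stdint.h>
#include <string.h>

static uint32_t probe_mxcsr_mask(void)
{
    static uint8_t __attribute__((__aligned__(16))) area[512];
    uint32_t mask;

    memset(area, 0, sizeof(area));
    asm volatile ( "fxsave %0" : "=m" (area) :: "memory" );
    memcpy(&mask, area + 28, sizeof(mask)); /* MXCSR_MASK lives at byte 28 */
    return mask ? mask : 0xffbf; /* all-zero field => architectural default */
}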
While for Intel the operation of these insns with CR4.OSFXSR=0 is implementation dependent, use the easiest solution there: simply don't look at the bit in the first place. For AMD and the like the behavior is well defined, so it gets handled together with FFXSR. Signed-off-by: Jan Beulich --- v9: Change a few field types in struct x86_fxsr. Leave reserved fields either entirely unnamed, or named "rsvd". Set state->fpu_ctrl. Avoid memory clobbers. Add memset() to FXSAVE logic. Add comments. v8: Respect EFER.FFXSE and CR4.OSFXSR. Correct wrong X86EMUL_NO_* dependencies. Reduce #ifdef-ary. v7: New. --- a/tools/tests/x86_emulator/test_x86_emulator.c +++ b/tools/tests/x86_emulator/test_x86_emulator.c @@ -767,6 +767,12 @@ static void zap_fpsel(unsigned int *env, } } +static void zap_xfpsel(unsigned int *env) +{ + env[3] &= ~0xffff; + env[5] &= ~0xffff; +} + #ifdef __x86_64__ # define STKVAL_DISP 64 static const struct { @@ -2517,6 +2523,91 @@ int main(int argc, char **argv) else printf("skipped\n"); + printf("%-40s", "Testing fxsave 4(%ecx)..."); + if ( stack_exec && cpu_has_fxsr ) + { + const uint16_t nine = 9; + + memset(res + 0x80, 0xcc, 0x400); + if ( cpu_has_sse2 ) + asm volatile ( "pcmpeqd %xmm7, %xmm7\n\t" + "pxor %xmm6, %xmm6\n\t" + "psubw %xmm7, %xmm6" ); + asm volatile ( "fninit\n\t" + "fld1\n\t" + "fidivs %1\n\t" + "fxsave %0" + : "=m" (res[0x100]) : "m" (nine) : "memory" ); + zap_xfpsel(&res[0x100]); + instr[0] = 0x0f; instr[1] = 0xae; instr[2] = 0x41; instr[3] = 0x04; + regs.eip = (unsigned long)&instr[0]; + regs.ecx = (unsigned long)(res + 0x7f); + memset(res + 0x100 + 0x74, 0x33, 0x30); + memset(res + 0x80 + 0x74, 0x33, 0x30); + rc = x86_emulate(&ctxt, &emulops); + zap_xfpsel(&res[0x80]); + if ( (rc != X86EMUL_OKAY) || + memcmp(res + 0x80, res + 0x100, 0x200) || + (regs.eip != (unsigned long)&instr[4]) ) + goto fail; + printf("okay\n"); + } + else + printf("skipped\n"); + + printf("%-40s", "Testing fxrstor -4(%ecx)..."); + if ( stack_exec && cpu_has_fxsr ) + { + const uint16_t eleven = 11; + + memset(res + 0x80, 0xcc, 0x400); + asm volatile ( "fxsave %0" : "=m" (res[0x80]) :: "memory" ); + zap_xfpsel(&res[0x80]); + if ( cpu_has_sse2 ) + asm volatile ( "pxor %xmm7, %xmm6\n\t" + "pxor %xmm7, %xmm3\n\t" + "pxor %xmm7, %xmm0\n\t" + "pxor %xmm7, %xmm7" ); + asm volatile ( "fninit\n\t" + "fld1\n\t" + "fidivs %0\n\t" + :: "m" (eleven) ); + instr[0] = 0x0f; instr[1] = 0xae; instr[2] = 0x49; instr[3] = 0xfc; + regs.eip = (unsigned long)&instr[0]; + regs.ecx = (unsigned long)(res + 0x81); + rc = x86_emulate(&ctxt, &emulops); + asm volatile ( "fxsave %0" : "=m" (res[0x100]) :: "memory" ); + if ( (rc != X86EMUL_OKAY) || + memcmp(res + 0x100, res + 0x80, 0x200) || + (regs.eip != (unsigned long)&instr[4]) ) + goto fail; + printf("okay\n"); + } + else + printf("skipped\n"); + +#ifdef __x86_64__ + printf("%-40s", "Testing fxsaveq 8(%edx)..."); + if ( stack_exec && cpu_has_fxsr ) + { + memset(res + 0x80, 0xcc, 0x400); + asm volatile ( "fxsaveq %0" : "=m" (res[0x100]) :: "memory" ); + instr[0] = 0x48; instr[1] = 0x0f; instr[2] = 0xae; instr[3] = 0x42; instr[4] = 0x08; + regs.eip = (unsigned long)&instr[0]; + regs.edx = (unsigned long)(res + 0x7e); + memset(res + 0x100 + 0x74, 0x33, 0x30); + memset(res + 0x80 + 0x74, 0x33, 0x30); + rc = x86_emulate(&ctxt, &emulops); + if ( (rc != X86EMUL_OKAY) || + memcmp(res + 0x80, res + 0x100, 0x200) || + (regs.eip != (unsigned long)&instr[5]) ) + goto fail; + printf("okay\n"); + } + else + printf("skipped\n"); +#endif + printf("%-40s", "Testing movq %mm3,(%ecx)..."); if
( stack_exec && cpu_has_mmx ) { --- a/tools/tests/x86_emulator/x86-emulate.c +++ b/tools/tests/x86_emulator/x86-emulate.c @@ -30,6 +30,13 @@ struct cpuid_policy cp; static char fpu_save_area[4096] __attribute__((__aligned__((64)))); static bool use_xsave; +/* + * Re-use the area above also as scratch space for the emulator itself. + * (When debugging the emulator, care needs to be taken when inserting + * printf() or alike function calls into regions using this.) + */ +#define FXSAVE_AREA ((struct x86_fxsr *)fpu_save_area) + void emul_save_fpu_state(void) { if ( use_xsave ) --- a/xen/arch/x86/x86_emulate/x86_emulate.c +++ b/xen/arch/x86/x86_emulate/x86_emulate.c @@ -860,6 +860,11 @@ struct x86_emulate_state { blk_fld, /* FLDENV, FRSTOR */ blk_fst, /* FNSTENV, FNSAVE */ #endif +#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \ + !defined(X86EMUL_NO_SIMD) + blk_fxrstor, + blk_fxsave, +#endif blk_movdir, } blk; uint8_t modrm, modrm_mod, modrm_reg, modrm_rm; @@ -953,6 +958,29 @@ typedef union { uint32_t data32[16]; } mmval_t; +struct x86_fxsr { + uint16_t fcw; + uint16_t fsw; + uint8_t ftw, :8; + uint16_t fop; + union { + struct { + uint32_t offs; + uint16_t sel, :16; + }; + uint64_t addr; + } fip, fdp; + uint32_t mxcsr; + uint32_t mxcsr_mask; + struct { + uint8_t data[10]; + uint16_t :16, :16, :16; + } fpreg[8]; + uint64_t __attribute__ ((aligned(16))) xmm[16][2]; + uint64_t rsvd[6]; + uint64_t avl[6]; +}; + /* * While proper alignment gets specified above, this doesn't get honored by * the compiler for automatic variables. Use this helper to instantiate a @@ -1910,6 +1938,7 @@ amd_like(const struct x86_emulate_ctxt * #define vcpu_has_cmov() (ctxt->cpuid->basic.cmov) #define vcpu_has_clflush() (ctxt->cpuid->basic.clflush) #define vcpu_has_mmx() (ctxt->cpuid->basic.mmx) +#define vcpu_has_fxsr() (ctxt->cpuid->basic.fxsr) #define vcpu_has_sse() (ctxt->cpuid->basic.sse) #define vcpu_has_sse2() (ctxt->cpuid->basic.sse2) #define vcpu_has_sse3() (ctxt->cpuid->basic.sse3) @@ -8139,6 +8168,49 @@ x86_emulate( case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */ switch ( modrm_reg & 7 ) { +#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \ + !defined(X86EMUL_NO_SIMD) + case 0: /* fxsave */ + case 1: /* fxrstor */ + generate_exception_if(vex.pfx, EXC_UD); + vcpu_must_have(fxsr); + generate_exception_if(ea.type != OP_MEM, EXC_UD); + generate_exception_if(!is_aligned(ea.mem.seg, ea.mem.off, 16, + ctxt, ops), + EXC_GP, 0); + fail_if(!ops->blk); + op_bytes = +#ifdef __x86_64__ + !mode_64bit() ? offsetof(struct x86_fxsr, xmm[8]) : +#endif + sizeof(struct x86_fxsr); + if ( amd_like(ctxt) ) + { + /* Assume "normal" operation in case of missing hooks. */ + if ( !ops->read_cr || + ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY ) + cr4 = X86_CR4_OSFXSR; + if ( !ops->read_msr || + ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY ) + msr_val = 0; + if ( !(cr4 & X86_CR4_OSFXSR) || + (mode_64bit() && mode_ring0() && (msr_val & EFER_FFXSE)) ) + op_bytes = offsetof(struct x86_fxsr, xmm[0]); + } + /* + * This could also be X86EMUL_FPU_mmx, but it shouldn't be + * X86EMUL_FPU_xmm, as we don't want CR4.OSFXSR checked. + */ + get_fpu(X86EMUL_FPU_fpu); + state->fpu_ctrl = true; + state->blk = modrm_reg & 1 ? 
blk_fxrstor : blk_fxsave; + if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL, + sizeof(struct x86_fxsr), &_regs.eflags, + state, ctxt)) != X86EMUL_OKAY ) + goto done; + break; +#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */ + #ifndef X86EMUL_NO_SIMD case 2: /* ldmxcsr */ generate_exception_if(vex.pfx, EXC_UD); @@ -11625,6 +11697,8 @@ int x86_emul_blk( struct x86_emulate_state *state, struct x86_emulate_ctxt *ctxt) { + int rc = X86EMUL_OKAY; + switch ( state->blk ) { bool zf; @@ -11843,6 +11917,86 @@ int x86_emul_blk( #endif /* X86EMUL_NO_FPU */ +#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \ + !defined(X86EMUL_NO_SIMD) + + case blk_fxrstor: + { + struct x86_fxsr *fxsr = FXSAVE_AREA; + + ASSERT(!data); + ASSERT(bytes == sizeof(*fxsr)); + ASSERT(state->op_bytes <= bytes); + + if ( state->op_bytes < sizeof(*fxsr) ) + { + if ( state->rex_prefix & REX_W ) + { + /* + * The only way to force fxsaveq on a wide range of gas + * versions. On older versions the rex64 prefix works only if + * we force an addressing mode that doesn't require extended + * registers. + */ + asm volatile ( ".byte 0x48; fxsave (%1)" + : "=m" (*fxsr) : "R" (fxsr) ); + } + else + asm volatile ( "fxsave %0" : "=m" (*fxsr) ); + } + + /* + * Don't chance the reserved or available ranges to contain any + * data FXRSTOR may actually consume in some way: Copy only the + * defined portion, and zero the rest. + */ + memcpy(fxsr, ptr, min(state->op_bytes, + (unsigned int)offsetof(struct x86_fxsr, rsvd))); + memset(fxsr->rsvd, 0, sizeof(*fxsr) - offsetof(struct x86_fxsr, rsvd)); + + generate_exception_if(fxsr->mxcsr & ~mxcsr_mask, EXC_GP, 0); + + if ( state->rex_prefix & REX_W ) + { + /* See above for why operand/constraints are this way. */ + asm volatile ( ".byte 0x48; fxrstor (%1)" + :: "m" (*fxsr), "R" (fxsr) ); + } + else + asm volatile ( "fxrstor %0" :: "m" (*fxsr) ); + break; + } + + case blk_fxsave: + { + struct x86_fxsr *fxsr = FXSAVE_AREA; + + ASSERT(!data); + ASSERT(bytes == sizeof(*fxsr)); + ASSERT(state->op_bytes <= bytes); + + if ( state->op_bytes < sizeof(*fxsr) ) + /* Don't chance consuming uninitialized data. */ + memset(fxsr, 0, state->op_bytes); + else + fxsr = ptr; + + if ( state->rex_prefix & REX_W ) + { + /* See above for why operand/constraints are this way. */ + asm volatile ( ".byte 0x48; fxsave (%1)" + : "=m" (*fxsr) : "R" (fxsr) ); + } + else + asm volatile ( "fxsave %0" : "=m" (*fxsr) ); + + if ( fxsr != ptr ) /* i.e. state->op_bytes < sizeof(*fxsr) */ + memcpy(ptr, fxsr, state->op_bytes); + break; + } + +#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */ + case blk_movdir: switch ( bytes ) { @@ -11896,7 +12050,8 @@ int x86_emul_blk( return X86EMUL_UNHANDLEABLE; } - return X86EMUL_OKAY; + done: + return rc; } static void __init __maybe_unused build_assertions(void) --- a/xen/arch/x86/x86_emulate.c +++ b/xen/arch/x86/x86_emulate.c @@ -42,6 +42,8 @@ } \ }) +#define FXSAVE_AREA current->arch.fpu_ctxt + #ifndef CONFIG_HVM # define X86EMUL_NO_FPU # define X86EMUL_NO_MMX