From patchwork Fri Apr 21 00:49:41 2017
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 9691519
From: David Matlack
To: kvm@vger.kernel.org
Cc: David Matlack
Subject: [kvm-unit-tests PATCH 09/32] x86: basic vmwrite/vmread test
Date: Thu, 20 Apr 2017 17:49:41 -0700
Message-Id: <20170421005004.137260-10-dmatlack@google.com>
X-Mailer: git-send-email 2.12.2.816.g2cccc81164-goog
In-Reply-To: <20170421005004.137260-1-dmatlack@google.com>
References: <20170421005004.137260-1-dmatlack@google.com>

Issues VMWRITE to every VMCS field and then checks that VMREAD returns
the expected result. Some tricky cases: read-only fields (skipped),
not-yet-implemented fields (skipped; VMREAD fails with VMfailValid),
and guest segment access-rights fields (reserved bits are zeroed by
the CPU, so only the non-reserved bits are checked).

Signed-off-by: David Matlack
---
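A note on the access-rights masks in the new vmcs_fields[] table, since the
constants look magic: 0x1d0ff (and 0x1f0ff for CS, which additionally has the
L bit) covers exactly the architecturally defined bits of the VMCS segment
access-rights format; everything else is reserved and reads back as zero. The
sketch below is illustrative only and is not part of the patch -- the
SEG_AR_* names are invented here and do not exist in the tree.

#define SEG_AR_TYPE         0x0000fULL  /* bits 3:0   segment type */
#define SEG_AR_S            0x00010ULL  /* bit  4     descriptor type */
#define SEG_AR_DPL          0x00060ULL  /* bits 6:5   privilege level */
#define SEG_AR_P            0x00080ULL  /* bit  7     present */
#define SEG_AR_AVL          0x01000ULL  /* bit  12    available for software */
#define SEG_AR_L            0x02000ULL  /* bit  13    64-bit code (CS only) */
#define SEG_AR_DB           0x04000ULL  /* bit  14    default operation size */
#define SEG_AR_G            0x08000ULL  /* bit  15    granularity */
#define SEG_AR_UNUSABLE     0x10000ULL  /* bit  16    segment unusable */

/* Non-CS segments keep every defined bit except L; CS additionally keeps L. */
_Static_assert((SEG_AR_TYPE | SEG_AR_S | SEG_AR_DPL | SEG_AR_P | SEG_AR_AVL |
                SEG_AR_DB | SEG_AR_G | SEG_AR_UNUSABLE) == 0x1d0ff,
               "non-CS access-rights mask");
_Static_assert((0x1d0ffULL | SEG_AR_L) == 0x1f0ff, "CS access-rights mask");
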
 x86/unittests.cfg |   6 ++
 x86/vmx.c         | 250 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 x86/vmx.h         |  29 +++
 3 files changed, 275 insertions(+), 10 deletions(-)

diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 8011429d2307..7973f2f62d26 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -230,6 +230,12 @@ extra_params = -cpu host,+vmx -append test_vmptrst
 arch = x86_64
 groups = vmx

+[vmx_test_vmwrite_vmread]
+file = vmx.flat
+extra_params = -cpu host,+vmx -append test_vmwrite_vmread
+arch = x86_64
+groups = vmx
+
 [vmx_test_vmx_caps]
 file = vmx.flat
 extra_params = -cpu host,+vmx -append test_vmx_caps
diff --git a/x86/vmx.c b/x86/vmx.c
index 39a891da4635..47404fbbd782 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -63,6 +63,243 @@ extern void *guest_entry;

 static volatile u32 stage;

+struct vmcs_field {
+        u64 mask;
+        u64 encoding;
+};
+
+#define MASK(_bits) GENMASK_ULL((_bits) - 1, 0)
+#define MASK_NATURAL MASK(sizeof(unsigned long) * 8)
+
+static struct vmcs_field vmcs_fields[] = {
+        { MASK(16), VPID },
+        { MASK(16), PINV },
+        { MASK(16), EPTP_IDX },
+
+        { MASK(16), GUEST_SEL_ES },
+        { MASK(16), GUEST_SEL_CS },
+        { MASK(16), GUEST_SEL_SS },
+        { MASK(16), GUEST_SEL_DS },
+        { MASK(16), GUEST_SEL_FS },
+        { MASK(16), GUEST_SEL_GS },
+        { MASK(16), GUEST_SEL_LDTR },
+        { MASK(16), GUEST_SEL_TR },
+        { MASK(16), GUEST_INT_STATUS },
+
+        { MASK(16), HOST_SEL_ES },
+        { MASK(16), HOST_SEL_CS },
+        { MASK(16), HOST_SEL_SS },
+        { MASK(16), HOST_SEL_DS },
+        { MASK(16), HOST_SEL_FS },
+        { MASK(16), HOST_SEL_GS },
+        { MASK(16), HOST_SEL_TR },
+
+        { MASK(64), IO_BITMAP_A },
+        { MASK(64), IO_BITMAP_B },
+        { MASK(64), MSR_BITMAP },
+        { MASK(64), EXIT_MSR_ST_ADDR },
+        { MASK(64), EXIT_MSR_LD_ADDR },
+        { MASK(64), ENTER_MSR_LD_ADDR },
+        { MASK(64), VMCS_EXEC_PTR },
+        { MASK(64), TSC_OFFSET },
+        { MASK(64), APIC_VIRT_ADDR },
+        { MASK(64), APIC_ACCS_ADDR },
+        { MASK(64), EPTP },
+
+        { 0 /* read-only */, INFO_PHYS_ADDR },
+
+        { MASK(64), VMCS_LINK_PTR },
+        { MASK(64), GUEST_DEBUGCTL },
+        { MASK(64), GUEST_EFER },
+        { MASK(64), GUEST_PAT },
+        { MASK(64), GUEST_PERF_GLOBAL_CTRL },
+        { MASK(64), GUEST_PDPTE },
+
+        { MASK(64), HOST_PAT },
+        { MASK(64), HOST_EFER },
+        { MASK(64), HOST_PERF_GLOBAL_CTRL },
+
+        { MASK(32), PIN_CONTROLS },
+        { MASK(32), CPU_EXEC_CTRL0 },
+        { MASK(32), EXC_BITMAP },
+        { MASK(32), PF_ERROR_MASK },
+        { MASK(32), PF_ERROR_MATCH },
+        { MASK(32), CR3_TARGET_COUNT },
+        { MASK(32), EXI_CONTROLS },
+        { MASK(32), EXI_MSR_ST_CNT },
+        { MASK(32), EXI_MSR_LD_CNT },
+        { MASK(32), ENT_CONTROLS },
+        { MASK(32), ENT_MSR_LD_CNT },
+        { MASK(32), ENT_INTR_INFO },
+        { MASK(32), ENT_INTR_ERROR },
+        { MASK(32), ENT_INST_LEN },
+        { MASK(32), TPR_THRESHOLD },
+        { MASK(32), CPU_EXEC_CTRL1 },
+
+        { 0 /* read-only */, VMX_INST_ERROR },
+        { 0 /* read-only */, EXI_REASON },
+        { 0 /* read-only */, EXI_INTR_INFO },
+        { 0 /* read-only */, EXI_INTR_ERROR },
+        { 0 /* read-only */, IDT_VECT_INFO },
+        { 0 /* read-only */, IDT_VECT_ERROR },
+        { 0 /* read-only */, EXI_INST_LEN },
+        { 0 /* read-only */, EXI_INST_INFO },
+
+        { MASK(32), GUEST_LIMIT_ES },
+        { MASK(32), GUEST_LIMIT_CS },
+        { MASK(32), GUEST_LIMIT_SS },
+        { MASK(32), GUEST_LIMIT_DS },
+        { MASK(32), GUEST_LIMIT_FS },
+        { MASK(32), GUEST_LIMIT_GS },
+        { MASK(32), GUEST_LIMIT_LDTR },
+        { MASK(32), GUEST_LIMIT_TR },
+        { MASK(32), GUEST_LIMIT_GDTR },
+        { MASK(32), GUEST_LIMIT_IDTR },
+        { 0x1d0ff, GUEST_AR_ES },
+        { 0x1f0ff, GUEST_AR_CS },
+        { 0x1d0ff, GUEST_AR_SS },
+        { 0x1d0ff, GUEST_AR_DS },
+        { 0x1d0ff, GUEST_AR_FS },
+        { 0x1d0ff, GUEST_AR_GS },
+        { 0x1d0ff, GUEST_AR_LDTR },
+        { 0x1d0ff, GUEST_AR_TR },
+        { MASK(32), GUEST_INTR_STATE },
+        { MASK(32), GUEST_ACTV_STATE },
+        { MASK(32), GUEST_SMBASE },
+        { MASK(32), GUEST_SYSENTER_CS },
+        { MASK(32), PREEMPT_TIMER_VALUE },
+
+        { MASK(32), HOST_SYSENTER_CS },
+
+        { MASK_NATURAL, CR0_MASK },
+        { MASK_NATURAL, CR4_MASK },
+        { MASK_NATURAL, CR0_READ_SHADOW },
+        { MASK_NATURAL, CR4_READ_SHADOW },
+        { MASK_NATURAL, CR3_TARGET_0 },
+        { MASK_NATURAL, CR3_TARGET_1 },
+        { MASK_NATURAL, CR3_TARGET_2 },
+        { MASK_NATURAL, CR3_TARGET_3 },
+
+        { 0 /* read-only */, EXI_QUALIFICATION },
+        { 0 /* read-only */, IO_RCX },
+        { 0 /* read-only */, IO_RSI },
+        { 0 /* read-only */, IO_RDI },
+        { 0 /* read-only */, IO_RIP },
+        { 0 /* read-only */, GUEST_LINEAR_ADDRESS },
+
+        { MASK_NATURAL, GUEST_CR0 },
+        { MASK_NATURAL, GUEST_CR3 },
+        { MASK_NATURAL, GUEST_CR4 },
+        { MASK_NATURAL, GUEST_BASE_ES },
+        { MASK_NATURAL, GUEST_BASE_CS },
+        { MASK_NATURAL, GUEST_BASE_SS },
+        { MASK_NATURAL, GUEST_BASE_DS },
+        { MASK_NATURAL, GUEST_BASE_FS },
+        { MASK_NATURAL, GUEST_BASE_GS },
+        { MASK_NATURAL, GUEST_BASE_LDTR },
+        { MASK_NATURAL, GUEST_BASE_TR },
+        { MASK_NATURAL, GUEST_BASE_GDTR },
+        { MASK_NATURAL, GUEST_BASE_IDTR },
+        { MASK_NATURAL, GUEST_DR7 },
+        { MASK_NATURAL, GUEST_RSP },
+        { MASK_NATURAL, GUEST_RIP },
+        { MASK_NATURAL, GUEST_RFLAGS },
+        { MASK_NATURAL, GUEST_PENDING_DEBUG },
+        { MASK_NATURAL, GUEST_SYSENTER_ESP },
+        { MASK_NATURAL, GUEST_SYSENTER_EIP },
+
+        { MASK_NATURAL, HOST_CR0 },
+        { MASK_NATURAL, HOST_CR3 },
+        { MASK_NATURAL, HOST_CR4 },
+        { MASK_NATURAL, HOST_BASE_FS },
+        { MASK_NATURAL, HOST_BASE_GS },
+        { MASK_NATURAL, HOST_BASE_TR },
+        { MASK_NATURAL, HOST_BASE_GDTR },
+        { MASK_NATURAL, HOST_BASE_IDTR },
+        { MASK_NATURAL, HOST_SYSENTER_ESP },
+        { MASK_NATURAL, HOST_SYSENTER_EIP },
+        { MASK_NATURAL, HOST_RSP },
+        { MASK_NATURAL, HOST_RIP },
+};
+
+static inline u64 vmcs_field_value(struct vmcs_field *f, u8 cookie)
+{
+        u64 value;
+
+        /* Incorporate the cookie and the field encoding into the value. */
+        value = cookie;
+        value |= (f->encoding << 8);
+        value |= 0xdeadbeefull << 32;
+
+        return value & f->mask;
+}
+
+static void set_vmcs_field(struct vmcs_field *f, u8 cookie)
+{
+        vmcs_write(f->encoding, vmcs_field_value(f, cookie));
+}
+
+static bool check_vmcs_field(struct vmcs_field *f, u8 cookie)
+{
+        u64 expected;
+        u64 actual;
+        int ret;
+
+        ret = vmcs_read_checking(f->encoding, &actual);
+        assert(!(ret & X86_EFLAGS_CF));
+        /* Skip VMCS fields that aren't recognized by the CPU */
+        if (ret & X86_EFLAGS_ZF)
+                return true;
+
+        expected = vmcs_field_value(f, cookie);
+        actual &= f->mask;
+
+        if (expected == actual)
+                return true;
+
+        printf("FAIL: VMWRITE/VMREAD %lx (expected: %lx, actual: %lx)\n",
+               f->encoding, (unsigned long) expected, (unsigned long) actual);
+
+        return false;
+}
+
+static void set_all_vmcs_fields(u8 cookie)
+{
+        int i;
+
+        for (i = 0; i < ARRAY_SIZE(vmcs_fields); i++)
+                set_vmcs_field(&vmcs_fields[i], cookie);
+}
+
+static bool check_all_vmcs_fields(u8 cookie)
+{
+        bool pass = true;
+        int i;
+
+        for (i = 0; i < ARRAY_SIZE(vmcs_fields); i++) {
+                if (!check_vmcs_field(&vmcs_fields[i], cookie))
+                        pass = false;
+        }
+
+        return pass;
+}
+
+void test_vmwrite_vmread(void)
+{
+        struct vmcs *vmcs = alloc_page();
+
+        memset(vmcs, 0, PAGE_SIZE);
+        vmcs->revision_id = basic.revision;
+        assert(!vmcs_clear(vmcs));
+        assert(!make_vmcs_current(vmcs));
+
+        set_all_vmcs_fields(0x42);
+        report("VMWRITE/VMREAD", check_all_vmcs_fields(0x42));
+
+        assert(!vmcs_clear(vmcs));
+        free_page(vmcs);
+}
+
 void vmx_set_test_stage(u32 s)
 {
         barrier();
@@ -87,16 +324,6 @@ void vmx_inc_test_stage(void)
         barrier();
 }

-static int make_vmcs_current(struct vmcs *vmcs)
-{
-        bool ret;
-        u64 rflags = read_rflags() | X86_EFLAGS_CF | X86_EFLAGS_ZF;
-
-        asm volatile ("push %1; popf; vmptrld %2; setbe %0"
-                      : "=q" (ret) : "q" (rflags), "m" (vmcs) : "cc");
-        return ret;
-}
-
 /* entry_sysenter */
 asm(
         ".align 4, 0x90\n\t"
@@ -1243,6 +1470,9 @@ int main(int argc, const char *argv[])
                 test_vmclear();
         if (test_wanted("test_vmptrst", argv, argc))
                 test_vmptrst();
+        if (test_wanted("test_vmwrite_vmread", argv, argc))
+                test_vmwrite_vmread();
+
         init_vmcs(&vmcs_root);
         if (vmx_run()) {
                 report("test vmlaunch", 0);
diff --git a/x86/vmx.h b/x86/vmx.h
index 52ece1aa53c8..2328f0eee05d 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -567,6 +567,16 @@ void vmx_set_test_stage(u32 s);
 u32 vmx_get_test_stage(void);
 void vmx_inc_test_stage(void);

+static inline int make_vmcs_current(struct vmcs *vmcs)
+{
+        bool ret;
+        u64 rflags = read_rflags() | X86_EFLAGS_CF | X86_EFLAGS_ZF;
+
+        asm volatile ("push %1; popf; vmptrld %2; setbe %0"
+                      : "=q" (ret) : "q" (rflags), "m" (vmcs) : "cc");
+        return ret;
+}
+
 static inline int vmcs_clear(struct vmcs *vmcs)
 {
         bool ret;
@@ -584,6 +594,25 @@ static inline u64 vmcs_read(enum Encoding enc)
         return val;
 }

+static inline int vmcs_read_checking(enum Encoding enc, u64 *value)
+{
+        u64 rflags = read_rflags() | X86_EFLAGS_CF | X86_EFLAGS_ZF;
+        u64 encoding = enc;
+        u64 val;
+
+        asm volatile ("shl $8, %%rax;"
+                      "sahf;"
+                      "vmread %[encoding], %[val];"
+                      "lahf;"
+                      "shr $8, %%rax"
+                      : /* output */ [val]"=rm"(val), "+a"(rflags)
+                      : /* input */ [encoding]"r"(encoding)
+                      : /* clobber */ "cc");
+
+        *value = val;
+        return rflags & (X86_EFLAGS_CF | X86_EFLAGS_ZF);
+}
+
 static inline int vmcs_write(enum Encoding enc, u64 val)
 {
         bool ret;
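
For reference, the flag convention vmcs_read_checking() relies on is the
standard VMX one: CF set means VMfailInvalid (no current VMCS), ZF set means
VMfailValid (e.g. an unsupported field encoding, with the error number in
VMX_INST_ERROR), and both clear means the VMREAD succeeded. A minimal usage
sketch, mirroring what check_vmcs_field() does -- try_vmread() is a made-up
name for illustration, not something added by this patch:

static bool try_vmread(enum Encoding field, u64 *value)
{
        int flags = vmcs_read_checking(field, value);

        /* CF would mean no VMCS is current; the test environment
         * guarantees one is, so treat that as a hard failure. */
        assert(!(flags & X86_EFLAGS_CF));

        /* ZF means the CPU does not implement this field. */
        return !(flags & X86_EFLAGS_ZF);
}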