From patchwork Tue Aug 11 14:10:46 2020
X-Patchwork-Submitter: Kamil Rytarowski
X-Patchwork-Id: 11709273
From: Kamil Rytarowski
To: rth@twiddle.net, ehabkost@redhat.com, slp@redhat.com, pbonzini@redhat.com,
 peter.maydell@linaro.org, philmd@redhat.com, max@m00nbsd.net,
 jmcneill@invisible.ca
Subject: [PATCH v5 1/4] Add the NVMM vcpu API
Date: Tue, 11 Aug 2020 16:10:46 +0200
Message-Id: <20200811141049.15824-1-n54@gmx.com>
Cc: Kamil Rytarowski , qemu-devel@nongnu.org
From: Maxime Villard

Adds support for the NetBSD Virtual Machine Monitor (NVMM) stubs and
introduces the nvmm.h sysemu API for vcpu scheduling and management.

Signed-off-by: Maxime Villard
Signed-off-by: Kamil Rytarowski
Reviewed-by: Sergio Lopez
Reviewed-by: Philippe Mathieu-Daudé
Tested-by: Jared McNeill
---
 accel/stubs/Makefile.objs |  1 +
 accel/stubs/nvmm-stub.c   | 43 +++++++++++++++++++++++++++++++++++++++
 include/sysemu/nvmm.h     | 35 +++++++++++++++++++++++++++++++
 3 files changed, 79 insertions(+)
 create mode 100644 accel/stubs/nvmm-stub.c
 create mode 100644 include/sysemu/nvmm.h

-- 
2.28.0

diff --git a/accel/stubs/Makefile.objs b/accel/stubs/Makefile.objs
index bbd14e71fb..38660a0b9b 100644
--- a/accel/stubs/Makefile.objs
+++ b/accel/stubs/Makefile.objs
@@ -1,6 +1,7 @@
 obj-$(call lnot,$(CONFIG_HAX)) += hax-stub.o
 obj-$(call lnot,$(CONFIG_HVF)) += hvf-stub.o
 obj-$(call lnot,$(CONFIG_WHPX)) += whpx-stub.o
+obj-$(call lnot,$(CONFIG_NVMM)) += nvmm-stub.o
 obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
 obj-$(call lnot,$(CONFIG_TCG)) += tcg-stub.o
 obj-$(call lnot,$(CONFIG_XEN)) += xen-stub.o
diff --git a/accel/stubs/nvmm-stub.c b/accel/stubs/nvmm-stub.c
new file mode 100644
index 0000000000..c2208b84a3
--- /dev/null
+++ b/accel/stubs/nvmm-stub.c
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
+ *
+ * NetBSD Virtual Machine Monitor (NVMM) accelerator stub.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "cpu.h"
+#include "sysemu/nvmm.h"
+
+int nvmm_init_vcpu(CPUState *cpu)
+{
+    return -1;
+}
+
+int nvmm_vcpu_exec(CPUState *cpu)
+{
+    return -1;
+}
+
+void nvmm_destroy_vcpu(CPUState *cpu)
+{
+}
+
+void nvmm_cpu_synchronize_state(CPUState *cpu)
+{
+}
+
+void nvmm_cpu_synchronize_post_reset(CPUState *cpu)
+{
+}
+
+void nvmm_cpu_synchronize_post_init(CPUState *cpu)
+{
+}
+
+void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu)
+{
+}
diff --git a/include/sysemu/nvmm.h b/include/sysemu/nvmm.h
new file mode 100644
index 0000000000..10496f3980
--- /dev/null
+++ b/include/sysemu/nvmm.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
+ *
+ * NetBSD Virtual Machine Monitor (NVMM) accelerator support.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef QEMU_NVMM_H
+#define QEMU_NVMM_H
+
+#include "config-host.h"
+#include "qemu-common.h"
+
+int nvmm_init_vcpu(CPUState *);
+int nvmm_vcpu_exec(CPUState *);
+void nvmm_destroy_vcpu(CPUState *);
+
+void nvmm_cpu_synchronize_state(CPUState *);
+void nvmm_cpu_synchronize_post_reset(CPUState *);
+void nvmm_cpu_synchronize_post_init(CPUState *);
+void nvmm_cpu_synchronize_pre_loadvm(CPUState *);
+
+#ifdef CONFIG_NVMM
+
+int nvmm_enabled(void);
+
+#else /* CONFIG_NVMM */
+
+#define nvmm_enabled() (0)
+
+#endif /* CONFIG_NVMM */
+
+#endif /* QEMU_NVMM_H */
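
The stub functions and the nvmm_enabled() macro above exist so that common
QEMU code can reference the NVMM vcpu API unconditionally: without
CONFIG_NVMM the calls resolve to the no-op/failing versions. A minimal
caller sketch (illustrative only, not part of this patch; the function name
example_run_with_nvmm() is hypothetical, and the real dispatch is wired up
in patch 4/4):

    #include "qemu/osdep.h"
    #include "sysemu/nvmm.h"

    /* Illustrative dispatch pattern for generic vcpu code. */
    static void example_run_with_nvmm(CPUState *cpu)
    {
        if (nvmm_enabled()) {                /* constant 0 without CONFIG_NVMM */
            nvmm_cpu_synchronize_state(cpu); /* no-op in the stub */
            if (nvmm_init_vcpu(cpu) == 0) {  /* the stub always returns -1 */
                nvmm_vcpu_exec(cpu);
                nvmm_destroy_vcpu(cpu);
            }
        }
    }
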
From patchwork Tue Aug 11 14:10:47 2020
X-Patchwork-Submitter: Kamil Rytarowski
X-Patchwork-Id: 11709275
From: Kamil Rytarowski
To: rth@twiddle.net, ehabkost@redhat.com, slp@redhat.com, pbonzini@redhat.com,
 peter.maydell@linaro.org, philmd@redhat.com, max@m00nbsd.net,
 jmcneill@invisible.ca
Subject: [PATCH v5 2/4] Add the NetBSD Virtual Machine Monitor accelerator.
Date: Tue, 11 Aug 2020 16:10:47 +0200
Message-Id: <20200811141049.15824-2-n54@gmx.com>
In-Reply-To: <20200811141049.15824-1-n54@gmx.com>
References: <20200811141049.15824-1-n54@gmx.com>
Cc: Kamil Rytarowski , qemu-devel@nongnu.org
From: Maxime Villard

Introduces configure support for the new NetBSD Virtual Machine Monitor,
which provides hypervisor acceleration to user-mode components on the NetBSD
platform. Signed-off-by: Maxime Villard Signed-off-by: Kamil Rytarowski Reviewed-by: Sergio Lopez Reviewed-by: Philippe Mathieu-Daudé Tested-by: Jared McNeill --- configure | 37 +++++++++++++++++++++++++++++++++++++ qemu-options.hx | 10 +++++----- 2 files changed, 42 insertions(+), 5 deletions(-) -- 2.28.0 diff --git a/configure b/configure index 2acc4d1465..fb9ffba2bf 100755 --- a/configure +++ b/configure @@ -246,6 +246,17 @@ supported_whpx_target() { return 1 } +supported_nvmm_target() { + test "$nvmm" = "yes" || return 1 + glob "$1" "*-softmmu" || return 1 + case "${1%-softmmu}" in + i386|x86_64) + return 0 + ;; + esac + return 1 +} + supported_target() { case "$1" in *-softmmu) @@ -273,6 +284,7 @@ supported_target() { supported_hax_target "$1" && return 0 supported_hvf_target "$1" && return 0 supported_whpx_target "$1" && return 0 + supported_nvmm_target "$1" && return 0 print_error "TCG disabled, but hardware accelerator not available for '$target'" return 1 } @@ -395,6 +407,7 @@ kvm="no" hax="no" hvf="no" whpx="no" +nvmm="no" rdma="" pvrdma="" gprof="no" @@ -847,6 +860,7 @@ DragonFly) NetBSD) bsd="yes" hax="yes" + nvmm="yes" make="${MAKE-gmake}" audio_drv_list="oss try-sdl" audio_possible_drivers="oss sdl" @@ -1233,6 +1247,10 @@ for opt do ;; --enable-whpx) whpx="yes" ;; + --disable-nvmm) nvmm="no" + ;; + --enable-nvmm) nvmm="yes" + ;; --disable-tcg-interpreter) tcg_interpreter="no" ;; --enable-tcg-interpreter) tcg_interpreter="yes" @@ -1879,6 +1897,7 @@ disabled with --disable-FEATURE, default is enabled if available: hax HAX acceleration support hvf Hypervisor.framework acceleration support whpx Windows Hypervisor Platform acceleration support + nvmm NetBSD Virtual Machine Monitor acceleration support rdma Enable RDMA-based migration pvrdma Enable PVRDMA support vde support for vde network @@ -2965,6 +2984,20 @@ if test "$whpx" != "no" ; then fi fi +########################################## +# NetBSD Virtual Machine Monitor (NVMM) accelerator check +if test "$nvmm" != "no" ; then + if check_include "nvmm.h" ; then + nvmm="yes" + LIBS="-lnvmm $LIBS" + else + if test "$nvmm" = "yes"; then + feature_not_found "NVMM" "NVMM is not available" + fi + nvmm="no" + fi +fi + ########################################## # Sparse probe if test "$sparse" != "no" ; then @@ -6934,6 +6967,7 @@ echo "KVM support $kvm" echo "HAX support $hax" echo "HVF support $hvf" echo "WHPX support $whpx" +echo "NVMM support $nvmm" echo "TCG support $tcg" if test "$tcg" = "yes" ; then echo "TCG debug enabled $debug_tcg" @@ -8332,6 +8366,9 @@ fi if test "$target_aligned_only" = "yes" ; then echo "TARGET_ALIGNED_ONLY=y" >> $config_target_mak fi +if supported_nvmm_target $target; then + echo "CONFIG_NVMM=y" >> $config_target_mak +fi if test "$target_bigendian" = "yes" ; then echo "TARGET_WORDS_BIGENDIAN=y" >> $config_target_mak fi diff --git a/qemu-options.hx b/qemu-options.hx index 708583b4ce..697accaa7e 100644 --- a/qemu-options.hx +++ b/qemu-options.hx @@ -26,7 +26,7 @@ DEF("machine", HAS_ARG, QEMU_OPTION_machine, \ "-machine [type=]name[,prop[=value][,...]]\n" " selects emulated machine ('-machine help' for list)\n" " property accel=accel1[:accel2[:...]] selects accelerator\n" - " supported accelerators are kvm, xen, hax, hvf, whpx or tcg (default: tcg)\n" + " supported accelerators are kvm, xen, hax, hvf, nvmm, whpx or tcg (default: tcg)\n" " vmport=on|off|auto controls emulation of vmport (default: auto)\n" " dump-guest-core=on|off include guest memory in a core dump (default=on)\n" " mem-merge=on|off controls 
memory merge support (default: on)\n"
@@ -58,7 +58,7 @@ SRST
 ``accel=accels1[:accels2[:...]]``
     This is used to enable an accelerator. Depending on the target
-    architecture, kvm, xen, hax, hvf, whpx or tcg can be available.
+    architecture, kvm, xen, hax, hvf, nvmm, whpx or tcg can be available.
     By default, tcg is used. If there is more than one accelerator
     specified, the next one is used if the previous one fails to
     initialize.
@@ -119,7 +119,7 @@ ERST
 DEF("accel", HAS_ARG, QEMU_OPTION_accel,
     "-accel [accel=]accelerator[,prop[=value][,...]]\n"
-    " select accelerator (kvm, xen, hax, hvf, whpx or tcg; use 'help' for a list)\n"
+    " select accelerator (kvm, xen, hax, hvf, nvmm, whpx or tcg; use 'help' for a list)\n"
     " igd-passthru=on|off (enable Xen integrated Intel graphics passthrough, default=off)\n"
     " kernel-irqchip=on|off|split controls accelerated irqchip support (default=on)\n"
     " kvm-shadow-mem=size of KVM shadow MMU in bytes\n"
@@ -128,8 +128,8 @@ DEF("accel", HAS_ARG, QEMU_OPTION_accel,
 SRST
 ``-accel name[,prop=value[,...]]``
     This is used to enable an accelerator. Depending on the target
-    architecture, kvm, xen, hax, hvf, whpx or tcg can be available. By
-    default, tcg is used. If there is more than one accelerator
+    architecture, kvm, xen, hax, hvf, nvmm, whpx or tcg can be available.
+    By default, tcg is used. If there is more than one accelerator
     specified, the next one is used if the previous one fails to
     initialize.
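
With the configure and option changes above in place, an NVMM-enabled build
and guest run on a NetBSD host look roughly like the following (illustrative
only; the target list, memory size and disk image name are examples, not part
of the patch):

    $ ./configure --target-list=x86_64-softmmu --enable-nvmm
    $ gmake
    $ ./x86_64-softmmu/qemu-system-x86_64 -machine pc,accel=nvmm -m 2G -hda netbsd.img
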
From patchwork Tue Aug 11 14:10:48 2020
X-Patchwork-Submitter: Kamil Rytarowski
X-Patchwork-Id: 11709285
From: Kamil Rytarowski
To: rth@twiddle.net, ehabkost@redhat.com, slp@redhat.com, pbonzini@redhat.com,
 peter.maydell@linaro.org, philmd@redhat.com, max@m00nbsd.net,
 jmcneill@invisible.ca
Subject: [PATCH v5 3/4] Introduce the NVMM impl
Date: Tue, 11 Aug 2020 16:10:48 +0200
Message-Id: <20200811141049.15824-3-n54@gmx.com>
In-Reply-To: <20200811141049.15824-1-n54@gmx.com>
References: <20200811141049.15824-1-n54@gmx.com>
Cc: Kamil Rytarowski , qemu-devel@nongnu.org
From: Maxime Villard

Implements the NetBSD Virtual Machine Monitor (NVMM) target, which acts as a
hypervisor accelerator for QEMU on the NetBSD platform. This gives QEMU much
greater speed than the emulated x86_64 paths taken on NetBSD today.
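
The state-conversion code in this patch relies on NetBSD's __SHIFTIN() and
__SHIFTOUT() macros to move the segment-descriptor attribute fields between
QEMU's packed 'flags' word and NVMM's bitfields. For readers on other
platforms, their semantics are equivalent to the following sketch
(illustrative only; the EXAMPLE_* names are not part of the patch):

    /* Lowest set bit of a mask, e.g. 0x6000 -> 0x2000. */
    #define EXAMPLE_LOWEST_SET_BIT(mask) ((((mask) - 1) & (mask)) ^ (mask))
    /* Extract a field: (x & mask), shifted down to bit 0. */
    #define EXAMPLE_SHIFTOUT(x, mask)    (((x) & (mask)) / EXAMPLE_LOWEST_SET_BIT(mask))
    /* Insert a field: x shifted up into the mask position. */
    #define EXAMPLE_SHIFTIN(x, mask)     ((x) * EXAMPLE_LOWEST_SET_BIT(mask))

    /* e.g. EXAMPLE_SHIFTOUT(flags, DESC_DPL_MASK) yields the 2-bit DPL value. */
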
Signed-off-by: Maxime Villard Signed-off-by: Kamil Rytarowski Reviewed-by: Sergio Lopez Tested-by: Jared McNeill --- target/i386/Makefile.objs | 1 + target/i386/nvmm-all.c | 1226 +++++++++++++++++++++++++++++++++++++ 2 files changed, 1227 insertions(+) create mode 100644 target/i386/nvmm-all.c -- 2.28.0 diff --git a/target/i386/Makefile.objs b/target/i386/Makefile.objs index 0b93143e27..ff0df68404 100644 --- a/target/i386/Makefile.objs +++ b/target/i386/Makefile.objs @@ -18,6 +18,7 @@ obj-$(CONFIG_HAX) += hax-all.o hax-mem.o hax-posix.o endif obj-$(CONFIG_HVF) += hvf/ obj-$(CONFIG_WHPX) += whpx-all.o +obj-$(CONFIG_NVMM) += nvmm-all.o endif obj-$(CONFIG_SEV) += sev.o obj-$(call lnot,$(CONFIG_SEV)) += sev-stub.o diff --git a/target/i386/nvmm-all.c b/target/i386/nvmm-all.c new file mode 100644 index 0000000000..408f7305b9 --- /dev/null +++ b/target/i386/nvmm-all.c @@ -0,0 +1,1226 @@ +/* + * Copyright (c) 2018-2019 Maxime Villard, All rights reserved. + * + * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU. + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "exec/address-spaces.h" +#include "exec/ioport.h" +#include "qemu-common.h" +#include "strings.h" +#include "sysemu/accel.h" +#include "sysemu/nvmm.h" +#include "sysemu/runstate.h" +#include "sysemu/sysemu.h" +#include "sysemu/cpus.h" +#include "qemu/main-loop.h" +#include "qemu/error-report.h" +#include "qemu/queue.h" +#include "qapi/error.h" +#include "migration/blocker.h" + +#include + +struct qemu_vcpu { + struct nvmm_vcpu vcpu; + uint8_t tpr; + bool stop; + + /* Window-exiting for INTs/NMIs. */ + bool int_window_exit; + bool nmi_window_exit; + + /* The guest is in an interrupt shadow (POP SS, etc). */ + bool int_shadow; +}; + +struct qemu_machine { + struct nvmm_capability cap; + struct nvmm_machine mach; +}; + +/* -------------------------------------------------------------------------- */ + +static bool nvmm_allowed; +static struct qemu_machine qemu_mach; + +static struct qemu_vcpu * +get_qemu_vcpu(CPUState *cpu) +{ + return (struct qemu_vcpu *)cpu->hax_vcpu; +} + +static struct nvmm_machine * +get_nvmm_mach(void) +{ + return &qemu_mach.mach; +} + +/* -------------------------------------------------------------------------- */ + +static void +nvmm_set_segment(struct nvmm_x64_state_seg *nseg, const SegmentCache *qseg) +{ + uint32_t attrib = qseg->flags; + + nseg->selector = qseg->selector; + nseg->limit = qseg->limit; + nseg->base = qseg->base; + nseg->attrib.type = __SHIFTOUT(attrib, DESC_TYPE_MASK); + nseg->attrib.s = __SHIFTOUT(attrib, DESC_S_MASK); + nseg->attrib.dpl = __SHIFTOUT(attrib, DESC_DPL_MASK); + nseg->attrib.p = __SHIFTOUT(attrib, DESC_P_MASK); + nseg->attrib.avl = __SHIFTOUT(attrib, DESC_AVL_MASK); + nseg->attrib.l = __SHIFTOUT(attrib, DESC_L_MASK); + nseg->attrib.def = __SHIFTOUT(attrib, DESC_B_MASK); + nseg->attrib.g = __SHIFTOUT(attrib, DESC_G_MASK); +} + +static void +nvmm_set_registers(CPUState *cpu) +{ + struct CPUX86State *env = (CPUArchState *)cpu->env_ptr; + struct nvmm_machine *mach = get_nvmm_mach(); + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + struct nvmm_vcpu *vcpu = &qcpu->vcpu; + struct nvmm_x64_state *state = vcpu->state; + uint64_t bitmap; + size_t i; + int ret; + + assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu)); + + /* GPRs. 
*/ + state->gprs[NVMM_X64_GPR_RAX] = env->regs[R_EAX]; + state->gprs[NVMM_X64_GPR_RCX] = env->regs[R_ECX]; + state->gprs[NVMM_X64_GPR_RDX] = env->regs[R_EDX]; + state->gprs[NVMM_X64_GPR_RBX] = env->regs[R_EBX]; + state->gprs[NVMM_X64_GPR_RSP] = env->regs[R_ESP]; + state->gprs[NVMM_X64_GPR_RBP] = env->regs[R_EBP]; + state->gprs[NVMM_X64_GPR_RSI] = env->regs[R_ESI]; + state->gprs[NVMM_X64_GPR_RDI] = env->regs[R_EDI]; +#ifdef TARGET_X86_64 + state->gprs[NVMM_X64_GPR_R8] = env->regs[R_R8]; + state->gprs[NVMM_X64_GPR_R9] = env->regs[R_R9]; + state->gprs[NVMM_X64_GPR_R10] = env->regs[R_R10]; + state->gprs[NVMM_X64_GPR_R11] = env->regs[R_R11]; + state->gprs[NVMM_X64_GPR_R12] = env->regs[R_R12]; + state->gprs[NVMM_X64_GPR_R13] = env->regs[R_R13]; + state->gprs[NVMM_X64_GPR_R14] = env->regs[R_R14]; + state->gprs[NVMM_X64_GPR_R15] = env->regs[R_R15]; +#endif + + /* RIP and RFLAGS. */ + state->gprs[NVMM_X64_GPR_RIP] = env->eip; + state->gprs[NVMM_X64_GPR_RFLAGS] = env->eflags; + + /* Segments. */ + nvmm_set_segment(&state->segs[NVMM_X64_SEG_CS], &env->segs[R_CS]); + nvmm_set_segment(&state->segs[NVMM_X64_SEG_DS], &env->segs[R_DS]); + nvmm_set_segment(&state->segs[NVMM_X64_SEG_ES], &env->segs[R_ES]); + nvmm_set_segment(&state->segs[NVMM_X64_SEG_FS], &env->segs[R_FS]); + nvmm_set_segment(&state->segs[NVMM_X64_SEG_GS], &env->segs[R_GS]); + nvmm_set_segment(&state->segs[NVMM_X64_SEG_SS], &env->segs[R_SS]); + + /* Special segments. */ + nvmm_set_segment(&state->segs[NVMM_X64_SEG_GDT], &env->gdt); + nvmm_set_segment(&state->segs[NVMM_X64_SEG_LDT], &env->ldt); + nvmm_set_segment(&state->segs[NVMM_X64_SEG_TR], &env->tr); + nvmm_set_segment(&state->segs[NVMM_X64_SEG_IDT], &env->idt); + + /* Control registers. */ + state->crs[NVMM_X64_CR_CR0] = env->cr[0]; + state->crs[NVMM_X64_CR_CR2] = env->cr[2]; + state->crs[NVMM_X64_CR_CR3] = env->cr[3]; + state->crs[NVMM_X64_CR_CR4] = env->cr[4]; + state->crs[NVMM_X64_CR_CR8] = qcpu->tpr; + state->crs[NVMM_X64_CR_XCR0] = env->xcr0; + + /* Debug registers. */ + state->drs[NVMM_X64_DR_DR0] = env->dr[0]; + state->drs[NVMM_X64_DR_DR1] = env->dr[1]; + state->drs[NVMM_X64_DR_DR2] = env->dr[2]; + state->drs[NVMM_X64_DR_DR3] = env->dr[3]; + state->drs[NVMM_X64_DR_DR6] = env->dr[6]; + state->drs[NVMM_X64_DR_DR7] = env->dr[7]; + + /* FPU. */ + state->fpu.fx_cw = env->fpuc; + state->fpu.fx_sw = (env->fpus & ~0x3800) | ((env->fpstt & 0x7) << 11); + state->fpu.fx_tw = 0; + for (i = 0; i < 8; i++) { + state->fpu.fx_tw |= (!env->fptags[i]) << i; + } + state->fpu.fx_opcode = env->fpop; + state->fpu.fx_ip.fa_64 = env->fpip; + state->fpu.fx_dp.fa_64 = env->fpdp; + state->fpu.fx_mxcsr = env->mxcsr; + state->fpu.fx_mxcsr_mask = 0x0000FFFF; + assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs)); + memcpy(state->fpu.fx_87_ac, env->fpregs, sizeof(env->fpregs)); + for (i = 0; i < 16; i++) { + memcpy(&state->fpu.fx_xmm[i].xmm_bytes[0], + &env->xmm_regs[i].ZMM_Q(0), 8); + memcpy(&state->fpu.fx_xmm[i].xmm_bytes[8], + &env->xmm_regs[i].ZMM_Q(1), 8); + } + + /* MSRs. 
*/ + state->msrs[NVMM_X64_MSR_EFER] = env->efer; + state->msrs[NVMM_X64_MSR_STAR] = env->star; +#ifdef TARGET_X86_64 + state->msrs[NVMM_X64_MSR_LSTAR] = env->lstar; + state->msrs[NVMM_X64_MSR_CSTAR] = env->cstar; + state->msrs[NVMM_X64_MSR_SFMASK] = env->fmask; + state->msrs[NVMM_X64_MSR_KERNELGSBASE] = env->kernelgsbase; +#endif + state->msrs[NVMM_X64_MSR_SYSENTER_CS] = env->sysenter_cs; + state->msrs[NVMM_X64_MSR_SYSENTER_ESP] = env->sysenter_esp; + state->msrs[NVMM_X64_MSR_SYSENTER_EIP] = env->sysenter_eip; + state->msrs[NVMM_X64_MSR_PAT] = env->pat; + state->msrs[NVMM_X64_MSR_TSC] = env->tsc; + + bitmap = + NVMM_X64_STATE_SEGS | + NVMM_X64_STATE_GPRS | + NVMM_X64_STATE_CRS | + NVMM_X64_STATE_DRS | + NVMM_X64_STATE_MSRS | + NVMM_X64_STATE_FPU; + + ret = nvmm_vcpu_setstate(mach, vcpu, bitmap); + if (ret == -1) { + error_report("NVMM: Failed to set virtual processor context," + " error=%d", errno); + } +} + +static void +nvmm_get_segment(SegmentCache *qseg, const struct nvmm_x64_state_seg *nseg) +{ + qseg->selector = nseg->selector; + qseg->limit = nseg->limit; + qseg->base = nseg->base; + + qseg->flags = + __SHIFTIN((uint32_t)nseg->attrib.type, DESC_TYPE_MASK) | + __SHIFTIN((uint32_t)nseg->attrib.s, DESC_S_MASK) | + __SHIFTIN((uint32_t)nseg->attrib.dpl, DESC_DPL_MASK) | + __SHIFTIN((uint32_t)nseg->attrib.p, DESC_P_MASK) | + __SHIFTIN((uint32_t)nseg->attrib.avl, DESC_AVL_MASK) | + __SHIFTIN((uint32_t)nseg->attrib.l, DESC_L_MASK) | + __SHIFTIN((uint32_t)nseg->attrib.def, DESC_B_MASK) | + __SHIFTIN((uint32_t)nseg->attrib.g, DESC_G_MASK); +} + +static void +nvmm_get_registers(CPUState *cpu) +{ + struct CPUX86State *env = (CPUArchState *)cpu->env_ptr; + struct nvmm_machine *mach = get_nvmm_mach(); + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + struct nvmm_vcpu *vcpu = &qcpu->vcpu; + X86CPU *x86_cpu = X86_CPU(cpu); + struct nvmm_x64_state *state = vcpu->state; + uint64_t bitmap, tpr; + size_t i; + int ret; + + assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu)); + + bitmap = + NVMM_X64_STATE_SEGS | + NVMM_X64_STATE_GPRS | + NVMM_X64_STATE_CRS | + NVMM_X64_STATE_DRS | + NVMM_X64_STATE_MSRS | + NVMM_X64_STATE_FPU; + + ret = nvmm_vcpu_getstate(mach, vcpu, bitmap); + if (ret == -1) { + error_report("NVMM: Failed to get virtual processor context," + " error=%d", errno); + } + + /* GPRs. */ + env->regs[R_EAX] = state->gprs[NVMM_X64_GPR_RAX]; + env->regs[R_ECX] = state->gprs[NVMM_X64_GPR_RCX]; + env->regs[R_EDX] = state->gprs[NVMM_X64_GPR_RDX]; + env->regs[R_EBX] = state->gprs[NVMM_X64_GPR_RBX]; + env->regs[R_ESP] = state->gprs[NVMM_X64_GPR_RSP]; + env->regs[R_EBP] = state->gprs[NVMM_X64_GPR_RBP]; + env->regs[R_ESI] = state->gprs[NVMM_X64_GPR_RSI]; + env->regs[R_EDI] = state->gprs[NVMM_X64_GPR_RDI]; +#ifdef TARGET_X86_64 + env->regs[R_R8] = state->gprs[NVMM_X64_GPR_R8]; + env->regs[R_R9] = state->gprs[NVMM_X64_GPR_R9]; + env->regs[R_R10] = state->gprs[NVMM_X64_GPR_R10]; + env->regs[R_R11] = state->gprs[NVMM_X64_GPR_R11]; + env->regs[R_R12] = state->gprs[NVMM_X64_GPR_R12]; + env->regs[R_R13] = state->gprs[NVMM_X64_GPR_R13]; + env->regs[R_R14] = state->gprs[NVMM_X64_GPR_R14]; + env->regs[R_R15] = state->gprs[NVMM_X64_GPR_R15]; +#endif + + /* RIP and RFLAGS. */ + env->eip = state->gprs[NVMM_X64_GPR_RIP]; + env->eflags = state->gprs[NVMM_X64_GPR_RFLAGS]; + + /* Segments. 
*/ + nvmm_get_segment(&env->segs[R_ES], &state->segs[NVMM_X64_SEG_ES]); + nvmm_get_segment(&env->segs[R_CS], &state->segs[NVMM_X64_SEG_CS]); + nvmm_get_segment(&env->segs[R_SS], &state->segs[NVMM_X64_SEG_SS]); + nvmm_get_segment(&env->segs[R_DS], &state->segs[NVMM_X64_SEG_DS]); + nvmm_get_segment(&env->segs[R_FS], &state->segs[NVMM_X64_SEG_FS]); + nvmm_get_segment(&env->segs[R_GS], &state->segs[NVMM_X64_SEG_GS]); + + /* Special segments. */ + nvmm_get_segment(&env->gdt, &state->segs[NVMM_X64_SEG_GDT]); + nvmm_get_segment(&env->ldt, &state->segs[NVMM_X64_SEG_LDT]); + nvmm_get_segment(&env->tr, &state->segs[NVMM_X64_SEG_TR]); + nvmm_get_segment(&env->idt, &state->segs[NVMM_X64_SEG_IDT]); + + /* Control registers. */ + env->cr[0] = state->crs[NVMM_X64_CR_CR0]; + env->cr[2] = state->crs[NVMM_X64_CR_CR2]; + env->cr[3] = state->crs[NVMM_X64_CR_CR3]; + env->cr[4] = state->crs[NVMM_X64_CR_CR4]; + tpr = state->crs[NVMM_X64_CR_CR8]; + if (tpr != qcpu->tpr) { + qcpu->tpr = tpr; + cpu_set_apic_tpr(x86_cpu->apic_state, tpr); + } + env->xcr0 = state->crs[NVMM_X64_CR_XCR0]; + + /* Debug registers. */ + env->dr[0] = state->drs[NVMM_X64_DR_DR0]; + env->dr[1] = state->drs[NVMM_X64_DR_DR1]; + env->dr[2] = state->drs[NVMM_X64_DR_DR2]; + env->dr[3] = state->drs[NVMM_X64_DR_DR3]; + env->dr[6] = state->drs[NVMM_X64_DR_DR6]; + env->dr[7] = state->drs[NVMM_X64_DR_DR7]; + + /* FPU. */ + env->fpuc = state->fpu.fx_cw; + env->fpstt = (state->fpu.fx_sw >> 11) & 0x7; + env->fpus = state->fpu.fx_sw & ~0x3800; + for (i = 0; i < 8; i++) { + env->fptags[i] = !((state->fpu.fx_tw >> i) & 1); + } + env->fpop = state->fpu.fx_opcode; + env->fpip = state->fpu.fx_ip.fa_64; + env->fpdp = state->fpu.fx_dp.fa_64; + env->mxcsr = state->fpu.fx_mxcsr; + assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs)); + memcpy(env->fpregs, state->fpu.fx_87_ac, sizeof(env->fpregs)); + for (i = 0; i < 16; i++) { + memcpy(&env->xmm_regs[i].ZMM_Q(0), + &state->fpu.fx_xmm[i].xmm_bytes[0], 8); + memcpy(&env->xmm_regs[i].ZMM_Q(1), + &state->fpu.fx_xmm[i].xmm_bytes[8], 8); + } + + /* MSRs. */ + env->efer = state->msrs[NVMM_X64_MSR_EFER]; + env->star = state->msrs[NVMM_X64_MSR_STAR]; +#ifdef TARGET_X86_64 + env->lstar = state->msrs[NVMM_X64_MSR_LSTAR]; + env->cstar = state->msrs[NVMM_X64_MSR_CSTAR]; + env->fmask = state->msrs[NVMM_X64_MSR_SFMASK]; + env->kernelgsbase = state->msrs[NVMM_X64_MSR_KERNELGSBASE]; +#endif + env->sysenter_cs = state->msrs[NVMM_X64_MSR_SYSENTER_CS]; + env->sysenter_esp = state->msrs[NVMM_X64_MSR_SYSENTER_ESP]; + env->sysenter_eip = state->msrs[NVMM_X64_MSR_SYSENTER_EIP]; + env->pat = state->msrs[NVMM_X64_MSR_PAT]; + env->tsc = state->msrs[NVMM_X64_MSR_TSC]; + + x86_update_hflags(env); +} + +static bool +nvmm_can_take_int(CPUState *cpu) +{ + struct CPUX86State *env = (CPUArchState *)cpu->env_ptr; + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + struct nvmm_vcpu *vcpu = &qcpu->vcpu; + struct nvmm_machine *mach = get_nvmm_mach(); + + if (qcpu->int_window_exit) { + return false; + } + + if (qcpu->int_shadow || !(env->eflags & IF_MASK)) { + struct nvmm_x64_state *state = vcpu->state; + + /* Exit on interrupt window. */ + nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_INTR); + state->intr.int_window_exiting = 1; + nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_INTR); + + return false; + } + + return true; +} + +static bool +nvmm_can_take_nmi(CPUState *cpu) +{ + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + + /* + * Contrary to INTs, NMIs always schedule an exit when they are + * completed. 
Therefore, if window-exiting is enabled, it means + * NMIs are blocked. + */ + if (qcpu->nmi_window_exit) { + return false; + } + + return true; +} + +/* + * Called before the VCPU is run. We inject events generated by the I/O + * thread, and synchronize the guest TPR. + */ +static void +nvmm_vcpu_pre_run(CPUState *cpu) +{ + struct CPUX86State *env = (CPUArchState *)cpu->env_ptr; + struct nvmm_machine *mach = get_nvmm_mach(); + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + struct nvmm_vcpu *vcpu = &qcpu->vcpu; + X86CPU *x86_cpu = X86_CPU(cpu); + struct nvmm_x64_state *state = vcpu->state; + struct nvmm_vcpu_event *event = vcpu->event; + bool has_event = false; + bool sync_tpr = false; + uint8_t tpr; + int ret; + + qemu_mutex_lock_iothread(); + + tpr = cpu_get_apic_tpr(x86_cpu->apic_state); + if (tpr != qcpu->tpr) { + qcpu->tpr = tpr; + sync_tpr = true; + } + + /* + * Force the VCPU out of its inner loop to process any INIT requests + * or commit pending TPR access. + */ + if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) { + cpu->exit_request = 1; + } + + if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_NMI)) { + if (nvmm_can_take_nmi(cpu)) { + cpu->interrupt_request &= ~CPU_INTERRUPT_NMI; + event->type = NVMM_VCPU_EVENT_INTR; + event->vector = 2; + has_event = true; + } + } + + if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_HARD)) { + if (nvmm_can_take_int(cpu)) { + cpu->interrupt_request &= ~CPU_INTERRUPT_HARD; + event->type = NVMM_VCPU_EVENT_INTR; + event->vector = cpu_get_pic_interrupt(env); + has_event = true; + } + } + + /* Don't want SMIs. */ + if (cpu->interrupt_request & CPU_INTERRUPT_SMI) { + cpu->interrupt_request &= ~CPU_INTERRUPT_SMI; + } + + if (sync_tpr) { + ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_CRS); + if (ret == -1) { + error_report("NVMM: Failed to get CPU state," + " error=%d", errno); + } + + state->crs[NVMM_X64_CR_CR8] = qcpu->tpr; + + ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_CRS); + if (ret == -1) { + error_report("NVMM: Failed to set CPU state," + " error=%d", errno); + } + } + + if (has_event) { + ret = nvmm_vcpu_inject(mach, vcpu); + if (ret == -1) { + error_report("NVMM: Failed to inject event," + " error=%d", errno); + } + } + + qemu_mutex_unlock_iothread(); +} + +/* + * Called after the VCPU ran. We synchronize the host view of the TPR and + * RFLAGS. + */ +static void +nvmm_vcpu_post_run(CPUState *cpu, struct nvmm_vcpu_exit *exit) +{ + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + struct CPUX86State *env = (CPUArchState *)cpu->env_ptr; + X86CPU *x86_cpu = X86_CPU(cpu); + uint64_t tpr; + + env->eflags = exit->exitstate.rflags; + qcpu->int_shadow = exit->exitstate.int_shadow; + qcpu->int_window_exit = exit->exitstate.int_window_exiting; + qcpu->nmi_window_exit = exit->exitstate.nmi_window_exiting; + + tpr = exit->exitstate.cr8; + if (qcpu->tpr != tpr) { + qcpu->tpr = tpr; + qemu_mutex_lock_iothread(); + cpu_set_apic_tpr(x86_cpu->apic_state, qcpu->tpr); + qemu_mutex_unlock_iothread(); + } +} + +/* -------------------------------------------------------------------------- */ + +static void +nvmm_io_callback(struct nvmm_io *io) +{ + MemTxAttrs attrs = { 0 }; + int ret; + + ret = address_space_rw(&address_space_io, io->port, attrs, io->data, + io->size, !io->in); + if (ret != MEMTX_OK) { + error_report("NVMM: I/O Transaction Failed " + "[%s, port=%u, size=%zu]", (io->in ? "in" : "out"), + io->port, io->size); + } + + /* Needed, otherwise infinite loop. 
*/ + current_cpu->vcpu_dirty = false; +} + +static void +nvmm_mem_callback(struct nvmm_mem *mem) +{ + cpu_physical_memory_rw(mem->gpa, mem->data, mem->size, mem->write); + + /* XXX Needed, otherwise infinite loop. */ + current_cpu->vcpu_dirty = false; +} + +static struct nvmm_assist_callbacks nvmm_callbacks = { + .io = nvmm_io_callback, + .mem = nvmm_mem_callback +}; + +/* -------------------------------------------------------------------------- */ + +static int +nvmm_handle_mem(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu) +{ + int ret; + + ret = nvmm_assist_mem(mach, vcpu); + if (ret == -1) { + error_report("NVMM: Mem Assist Failed [gpa=%p]", + (void *)vcpu->exit->u.mem.gpa); + } + + return ret; +} + +static int +nvmm_handle_io(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu) +{ + int ret; + + ret = nvmm_assist_io(mach, vcpu); + if (ret == -1) { + error_report("NVMM: I/O Assist Failed [port=%d]", + (int)vcpu->exit->u.io.port); + } + + return ret; +} + +static int +nvmm_handle_rdmsr(struct nvmm_machine *mach, CPUState *cpu, + struct nvmm_vcpu_exit *exit) +{ + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + struct nvmm_vcpu *vcpu = &qcpu->vcpu; + X86CPU *x86_cpu = X86_CPU(cpu); + struct nvmm_x64_state *state = vcpu->state; + uint64_t val; + int ret; + + switch (exit->u.rdmsr.msr) { + case MSR_IA32_APICBASE: + val = cpu_get_apic_base(x86_cpu->apic_state); + break; + case MSR_MTRRcap: + case MSR_MTRRdefType: + case MSR_MCG_CAP: + case MSR_MCG_STATUS: + val = 0; + break; + default: /* More MSRs to add? */ + val = 0; + error_report("NVMM: Unexpected RDMSR 0x%x, ignored", + exit->u.rdmsr.msr); + break; + } + + ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS); + if (ret == -1) { + return -1; + } + + state->gprs[NVMM_X64_GPR_RAX] = (val & 0xFFFFFFFF); + state->gprs[NVMM_X64_GPR_RDX] = (val >> 32); + state->gprs[NVMM_X64_GPR_RIP] = exit->u.rdmsr.npc; + + ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS); + if (ret == -1) { + return -1; + } + + return 0; +} + +static int +nvmm_handle_wrmsr(struct nvmm_machine *mach, CPUState *cpu, + struct nvmm_vcpu_exit *exit) +{ + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + struct nvmm_vcpu *vcpu = &qcpu->vcpu; + X86CPU *x86_cpu = X86_CPU(cpu); + struct nvmm_x64_state *state = vcpu->state; + uint64_t val; + int ret; + + val = exit->u.wrmsr.val; + + switch (exit->u.wrmsr.msr) { + case MSR_IA32_APICBASE: + cpu_set_apic_base(x86_cpu->apic_state, val); + break; + case MSR_MTRRdefType: + case MSR_MCG_STATUS: + break; + default: /* More MSRs to add? 
*/ + error_report("NVMM: Unexpected WRMSR 0x%x [val=0x%lx], ignored", + exit->u.wrmsr.msr, val); + break; + } + + ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS); + if (ret == -1) { + return -1; + } + + state->gprs[NVMM_X64_GPR_RIP] = exit->u.wrmsr.npc; + + ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS); + if (ret == -1) { + return -1; + } + + return 0; +} + +static int +nvmm_handle_halted(struct nvmm_machine *mach, CPUState *cpu, + struct nvmm_vcpu_exit *exit) +{ + struct CPUX86State *env = (CPUArchState *)cpu->env_ptr; + int ret = 0; + + qemu_mutex_lock_iothread(); + + if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) && + (env->eflags & IF_MASK)) && + !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) { + cpu->exception_index = EXCP_HLT; + cpu->halted = true; + ret = 1; + } + + qemu_mutex_unlock_iothread(); + + return ret; +} + +static int +nvmm_inject_ud(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu) +{ + struct nvmm_vcpu_event *event = vcpu->event; + + event->type = NVMM_VCPU_EVENT_EXCP; + event->vector = 6; + event->u.excp.error = 0; + + return nvmm_vcpu_inject(mach, vcpu); +} + +static int +nvmm_vcpu_loop(CPUState *cpu) +{ + struct CPUX86State *env = (CPUArchState *)cpu->env_ptr; + struct nvmm_machine *mach = get_nvmm_mach(); + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + struct nvmm_vcpu *vcpu = &qcpu->vcpu; + X86CPU *x86_cpu = X86_CPU(cpu); + struct nvmm_vcpu_exit *exit = vcpu->exit; + int ret; + + /* + * Some asynchronous events must be handled outside of the inner + * VCPU loop. They are handled here. + */ + if (cpu->interrupt_request & CPU_INTERRUPT_INIT) { + nvmm_cpu_synchronize_state(cpu); + do_cpu_init(x86_cpu); + /* set int/nmi windows back to the reset state */ + } + if (cpu->interrupt_request & CPU_INTERRUPT_POLL) { + cpu->interrupt_request &= ~CPU_INTERRUPT_POLL; + apic_poll_irq(x86_cpu->apic_state); + } + if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) && + (env->eflags & IF_MASK)) || + (cpu->interrupt_request & CPU_INTERRUPT_NMI)) { + cpu->halted = false; + } + if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) { + nvmm_cpu_synchronize_state(cpu); + do_cpu_sipi(x86_cpu); + } + if (cpu->interrupt_request & CPU_INTERRUPT_TPR) { + cpu->interrupt_request &= ~CPU_INTERRUPT_TPR; + nvmm_cpu_synchronize_state(cpu); + apic_handle_tpr_access_report(x86_cpu->apic_state, env->eip, + env->tpr_access_type); + } + + if (cpu->halted) { + cpu->exception_index = EXCP_HLT; + atomic_set(&cpu->exit_request, false); + return 0; + } + + qemu_mutex_unlock_iothread(); + cpu_exec_start(cpu); + + /* + * Inner VCPU loop. 
+ */ + do { + if (cpu->vcpu_dirty) { + nvmm_set_registers(cpu); + cpu->vcpu_dirty = false; + } + + if (qcpu->stop) { + cpu->exception_index = EXCP_INTERRUPT; + qcpu->stop = false; + ret = 1; + break; + } + + nvmm_vcpu_pre_run(cpu); + + if (atomic_read(&cpu->exit_request)) { + qemu_cpu_kick_self(); + } + + ret = nvmm_vcpu_run(mach, vcpu); + if (ret == -1) { + error_report("NVMM: Failed to exec a virtual processor," + " error=%d", errno); + break; + } + + nvmm_vcpu_post_run(cpu, exit); + + switch (exit->reason) { + case NVMM_VCPU_EXIT_NONE: + break; + case NVMM_VCPU_EXIT_MEMORY: + ret = nvmm_handle_mem(mach, vcpu); + break; + case NVMM_VCPU_EXIT_IO: + ret = nvmm_handle_io(mach, vcpu); + break; + case NVMM_VCPU_EXIT_INT_READY: + case NVMM_VCPU_EXIT_NMI_READY: + case NVMM_VCPU_EXIT_TPR_CHANGED: + break; + case NVMM_VCPU_EXIT_HALTED: + ret = nvmm_handle_halted(mach, cpu, exit); + break; + case NVMM_VCPU_EXIT_SHUTDOWN: + qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET); + cpu->exception_index = EXCP_INTERRUPT; + ret = 1; + break; + case NVMM_VCPU_EXIT_RDMSR: + ret = nvmm_handle_rdmsr(mach, cpu, exit); + break; + case NVMM_VCPU_EXIT_WRMSR: + ret = nvmm_handle_wrmsr(mach, cpu, exit); + break; + case NVMM_VCPU_EXIT_MONITOR: + case NVMM_VCPU_EXIT_MWAIT: + ret = nvmm_inject_ud(mach, vcpu); + break; + default: + error_report("NVMM: Unexpected VM exit code 0x%lx [hw=0x%lx]", + exit->reason, exit->u.inv.hwcode); + nvmm_get_registers(cpu); + qemu_mutex_lock_iothread(); + qemu_system_guest_panicked(cpu_get_crash_info(cpu)); + qemu_mutex_unlock_iothread(); + ret = -1; + break; + } + } while (ret == 0); + + cpu_exec_end(cpu); + qemu_mutex_lock_iothread(); + current_cpu = cpu; + + atomic_set(&cpu->exit_request, false); + + return ret < 0; +} + +/* -------------------------------------------------------------------------- */ + +static void +do_nvmm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg) +{ + nvmm_get_registers(cpu); + cpu->vcpu_dirty = true; +} + +static void +do_nvmm_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg) +{ + nvmm_set_registers(cpu); + cpu->vcpu_dirty = false; +} + +static void +do_nvmm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg) +{ + nvmm_set_registers(cpu); + cpu->vcpu_dirty = false; +} + +static void +do_nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu, run_on_cpu_data arg) +{ + cpu->vcpu_dirty = true; +} + +void nvmm_cpu_synchronize_state(CPUState *cpu) +{ + if (!cpu->vcpu_dirty) { + run_on_cpu(cpu, do_nvmm_cpu_synchronize_state, RUN_ON_CPU_NULL); + } +} + +void nvmm_cpu_synchronize_post_reset(CPUState *cpu) +{ + run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_reset, RUN_ON_CPU_NULL); +} + +void nvmm_cpu_synchronize_post_init(CPUState *cpu) +{ + run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_init, RUN_ON_CPU_NULL); +} + +void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu) +{ + run_on_cpu(cpu, do_nvmm_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL); +} + +/* -------------------------------------------------------------------------- */ + +static Error *nvmm_migration_blocker; + +static void +nvmm_ipi_signal(int sigcpu) +{ + struct qemu_vcpu *qcpu; + + if (current_cpu) { + qcpu = get_qemu_vcpu(current_cpu); + qcpu->stop = true; + } +} + +static void +nvmm_init_cpu_signals(void) +{ + struct sigaction sigact; + sigset_t set; + + /* Install the IPI handler. */ + memset(&sigact, 0, sizeof(sigact)); + sigact.sa_handler = nvmm_ipi_signal; + sigaction(SIG_IPI, &sigact, NULL); + + /* Allow IPIs on the current thread. 
*/ + sigprocmask(SIG_BLOCK, NULL, &set); + sigdelset(&set, SIG_IPI); + pthread_sigmask(SIG_SETMASK, &set, NULL); +} + +int +nvmm_init_vcpu(CPUState *cpu) +{ + struct nvmm_machine *mach = get_nvmm_mach(); + struct nvmm_vcpu_conf_cpuid cpuid; + struct nvmm_vcpu_conf_tpr tpr; + Error *local_error = NULL; + struct qemu_vcpu *qcpu; + int ret, err; + + nvmm_init_cpu_signals(); + + if (nvmm_migration_blocker == NULL) { + error_setg(&nvmm_migration_blocker, + "NVMM: Migration not supported"); + + (void)migrate_add_blocker(nvmm_migration_blocker, &local_error); + if (local_error) { + error_report_err(local_error); + migrate_del_blocker(nvmm_migration_blocker); + error_free(nvmm_migration_blocker); + return -EINVAL; + } + } + + qcpu = g_malloc0(sizeof(*qcpu)); + if (qcpu == NULL) { + error_report("NVMM: Failed to allocate VCPU context."); + return -ENOMEM; + } + + ret = nvmm_vcpu_create(mach, cpu->cpu_index, &qcpu->vcpu); + if (ret == -1) { + err = errno; + error_report("NVMM: Failed to create a virtual processor," + " error=%d", err); + g_free(qcpu); + return -err; + } + + memset(&cpuid, 0, sizeof(cpuid)); + cpuid.mask = 1; + cpuid.leaf = 0x00000001; + cpuid.u.mask.set.edx = CPUID_MCE | CPUID_MCA | CPUID_MTRR; + ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CPUID, + &cpuid); + if (ret == -1) { + err = errno; + error_report("NVMM: Failed to configure a virtual processor," + " error=%d", err); + g_free(qcpu); + return -err; + } + + ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CALLBACKS, + &nvmm_callbacks); + if (ret == -1) { + err = errno; + error_report("NVMM: Failed to configure a virtual processor," + " error=%d", err); + g_free(qcpu); + return -err; + } + + if (qemu_mach.cap.arch.vcpu_conf_support & NVMM_CAP_ARCH_VCPU_CONF_TPR) { + memset(&tpr, 0, sizeof(tpr)); + tpr.exit_changed = 1; + ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_TPR, &tpr); + if (ret == -1) { + err = errno; + error_report("NVMM: Failed to configure a virtual processor," + " error=%d", err); + g_free(qcpu); + return -err; + } + } + + cpu->vcpu_dirty = true; + cpu->hax_vcpu = (struct hax_vcpu_state *)qcpu; + + return 0; +} + +int +nvmm_vcpu_exec(CPUState *cpu) +{ + int ret, fatal; + + while (1) { + if (cpu->exception_index >= EXCP_INTERRUPT) { + ret = cpu->exception_index; + cpu->exception_index = -1; + break; + } + + fatal = nvmm_vcpu_loop(cpu); + + if (fatal) { + error_report("NVMM: Failed to execute a VCPU."); + abort(); + } + } + + return ret; +} + +void +nvmm_destroy_vcpu(CPUState *cpu) +{ + struct nvmm_machine *mach = get_nvmm_mach(); + struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu); + + nvmm_vcpu_destroy(mach, &qcpu->vcpu); + g_free(cpu->hax_vcpu); +} + +/* -------------------------------------------------------------------------- */ + +static void +nvmm_update_mapping(hwaddr start_pa, ram_addr_t size, uintptr_t hva, + bool add, bool rom, const char *name) +{ + struct nvmm_machine *mach = get_nvmm_mach(); + int ret, prot; + + if (add) { + prot = PROT_READ | PROT_EXEC; + if (!rom) { + prot |= PROT_WRITE; + } + ret = nvmm_gpa_map(mach, hva, start_pa, size, prot); + } else { + ret = nvmm_gpa_unmap(mach, hva, start_pa, size); + } + + if (ret == -1) { + error_report("NVMM: Failed to %s GPA range '%s' PA:%p, " + "Size:%p bytes, HostVA:%p, error=%d", + (add ? 
"map" : "unmap"), name, (void *)(uintptr_t)start_pa, + (void *)size, (void *)hva, errno); + } +} + +static void +nvmm_process_section(MemoryRegionSection *section, int add) +{ + MemoryRegion *mr = section->mr; + hwaddr start_pa = section->offset_within_address_space; + ram_addr_t size = int128_get64(section->size); + unsigned int delta; + uintptr_t hva; + + if (!memory_region_is_ram(mr)) { + return; + } + + /* Adjust start_pa and size so that they are page-aligned. */ + delta = qemu_real_host_page_size - (start_pa & ~qemu_real_host_page_mask); + delta &= ~qemu_real_host_page_mask; + if (delta > size) { + return; + } + start_pa += delta; + size -= delta; + size &= qemu_real_host_page_mask; + if (!size || (start_pa & ~qemu_real_host_page_mask)) { + return; + } + + hva = (uintptr_t)memory_region_get_ram_ptr(mr) + + section->offset_within_region + delta; + + nvmm_update_mapping(start_pa, size, hva, add, + memory_region_is_rom(mr), mr->name); +} + +static void +nvmm_region_add(MemoryListener *listener, MemoryRegionSection *section) +{ + memory_region_ref(section->mr); + nvmm_process_section(section, 1); +} + +static void +nvmm_region_del(MemoryListener *listener, MemoryRegionSection *section) +{ + nvmm_process_section(section, 0); + memory_region_unref(section->mr); +} + +static void +nvmm_transaction_begin(MemoryListener *listener) +{ + /* nothing */ +} + +static void +nvmm_transaction_commit(MemoryListener *listener) +{ + /* nothing */ +} + +static void +nvmm_log_sync(MemoryListener *listener, MemoryRegionSection *section) +{ + MemoryRegion *mr = section->mr; + + if (!memory_region_is_ram(mr)) { + return; + } + + memory_region_set_dirty(mr, 0, int128_get64(section->size)); +} + +static MemoryListener nvmm_memory_listener = { + .begin = nvmm_transaction_begin, + .commit = nvmm_transaction_commit, + .region_add = nvmm_region_add, + .region_del = nvmm_region_del, + .log_sync = nvmm_log_sync, + .priority = 10, +}; + +static void +nvmm_ram_block_added(RAMBlockNotifier *n, void *host, size_t size) +{ + struct nvmm_machine *mach = get_nvmm_mach(); + uintptr_t hva = (uintptr_t)host; + int ret; + + ret = nvmm_hva_map(mach, hva, size); + + if (ret == -1) { + error_report("NVMM: Failed to map HVA, HostVA:%p " + "Size:%p bytes, error=%d", + (void *)hva, (void *)size, errno); + } +} + +static struct RAMBlockNotifier nvmm_ram_notifier = { + .ram_block_added = nvmm_ram_block_added +}; + +/* -------------------------------------------------------------------------- */ + +static void +nvmm_handle_interrupt(CPUState *cpu, int mask) +{ + cpu->interrupt_request |= mask; + + if (!qemu_cpu_is_self(cpu)) { + qemu_cpu_kick(cpu); + } +} + +/* -------------------------------------------------------------------------- */ + +static int +nvmm_accel_init(MachineState *ms) +{ + int ret, err; + + ret = nvmm_init(); + if (ret == -1) { + err = errno; + error_report("NVMM: Initialization failed, error=%d", errno); + return -err; + } + + ret = nvmm_capability(&qemu_mach.cap); + if (ret == -1) { + err = errno; + error_report("NVMM: Unable to fetch capability, error=%d", errno); + return -err; + } + if (qemu_mach.cap.version != NVMM_KERN_VERSION) { + error_report("NVMM: Unsupported version %u", qemu_mach.cap.version); + return -EPROGMISMATCH; + } + if (qemu_mach.cap.state_size != sizeof(struct nvmm_x64_state)) { + error_report("NVMM: Wrong state size %u", qemu_mach.cap.state_size); + return -EPROGMISMATCH; + } + + ret = nvmm_machine_create(&qemu_mach.mach); + if (ret == -1) { + err = errno; + error_report("NVMM: Machine creation 
failed, error=%d", errno); + return -err; + } + + memory_listener_register(&nvmm_memory_listener, &address_space_memory); + ram_block_notifier_add(&nvmm_ram_notifier); + + cpu_interrupt_handler = nvmm_handle_interrupt; + + printf("NetBSD Virtual Machine Monitor accelerator is operational\n"); + return 0; +} + +int +nvmm_enabled(void) +{ + return nvmm_allowed; +} + +static void +nvmm_accel_class_init(ObjectClass *oc, void *data) +{ + AccelClass *ac = ACCEL_CLASS(oc); + ac->name = "NVMM"; + ac->init_machine = nvmm_accel_init; + ac->allowed = &nvmm_allowed; +} + +static const TypeInfo nvmm_accel_type = { + .name = ACCEL_CLASS_NAME("nvmm"), + .parent = TYPE_ACCEL, + .class_init = nvmm_accel_class_init, +}; + +static void +nvmm_type_init(void) +{ + type_register_static(&nvmm_accel_type); +} + +type_init(nvmm_type_init); From patchwork Tue Aug 11 14:10:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Kamil Rytarowski X-Patchwork-Id: 11709277 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7EC3813B1 for ; Tue, 11 Aug 2020 14:15:53 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 41AE320782 for ; Tue, 11 Aug 2020 14:15:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=gmx.net header.i=@gmx.net header.b="AR1KJeLz" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 41AE320782 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=gmx.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:37912 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1k5V4B-000652-VS for patchwork-qemu-devel@patchwork.kernel.org; Tue, 11 Aug 2020 10:15:52 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:35028) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1k5V1B-0003U7-8E for qemu-devel@nongnu.org; Tue, 11 Aug 2020 10:12:45 -0400 Received: from mout.gmx.net ([212.227.17.20]:47339) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1k5V18-00013G-6c for qemu-devel@nongnu.org; Tue, 11 Aug 2020 10:12:44 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=gmx.net; s=badeba3b8450; t=1597155156; bh=ZQ0DzVPA2iYTtjlzNojomsX+j/smyo/M5PvfBoj6A6E=; h=X-UI-Sender-Class:From:To:Cc:Subject:Date:In-Reply-To:References; b=AR1KJeLz9T1iHHra7vvFip/g4FQInnt141cx3Jzy/1X8JUhYuM67WfCAeQbN9C6hG AzuJaOuH1kHiBY4wkeqslecPllUr5evng8ygsBl823BUabhKMILhiaWTwz7PKYj1QJ AZ8EE0N+fthBnqKXMfoam4dybrxbv36BkZC3JuEs= X-UI-Sender-Class: 01bb95c1-4bf8-414a-932a-4f6e2808ef9c Received: from localhost.localdomain ([89.79.191.25]) by mail.gmx.com (mrgmx105 [212.227.17.174]) with ESMTPSA (Nemesis) id 1Mxm3K-1kub413RdO-00zBkX; Tue, 11 Aug 2020 16:12:36 +0200 From: Kamil Rytarowski To: rth@twiddle.net, ehabkost@redhat.com, slp@redhat.com, pbonzini@redhat.com, peter.maydell@linaro.org, philmd@redhat.com, max@m00nbsd.net, jmcneill@invisible.ca Subject: [PATCH v5 4/4] Add the NVMM acceleration enlightenments Date: Tue, 11 Aug 2020 
From patchwork Tue Aug 11 14:10:49 2020
X-Patchwork-Submitter: Kamil Rytarowski
X-Patchwork-Id: 11709277
From: Kamil Rytarowski
To: rth@twiddle.net, ehabkost@redhat.com, slp@redhat.com, pbonzini@redhat.com,
 peter.maydell@linaro.org, philmd@redhat.com, max@m00nbsd.net,
 jmcneill@invisible.ca
Cc: Kamil Rytarowski, qemu-devel@nongnu.org
Subject: [PATCH v5 4/4] Add the NVMM acceleration enlightenments
Date: Tue, 11 Aug 2020 16:10:49 +0200
Message-Id: <20200811141049.15824-4-n54@gmx.com>
In-Reply-To: <20200811141049.15824-1-n54@gmx.com>
References: <20200811141049.15824-1-n54@gmx.com>

From: Maxime Villard

Implements the NVMM accelerator cpu enlightenments to actually use the
nvmm-all accelerator on NetBSD platforms.
Signed-off-by: Maxime Villard
Signed-off-by: Kamil Rytarowski
Reviewed-by: Sergio Lopez
Reviewed-by: Philippe Mathieu-Daudé
Tested-by: Jared McNeill
---
 include/sysemu/hw_accel.h | 14 ++++++++++
 softmmu/cpus.c            | 58 +++++++++++++++++++++++++++++++++++++++
 target/i386/helper.c      |  2 +-
 3 files changed, 73 insertions(+), 1 deletion(-)

-- 
2.28.0

diff --git a/include/sysemu/hw_accel.h b/include/sysemu/hw_accel.h
index e128f8b06b..9e19f5794c 100644
--- a/include/sysemu/hw_accel.h
+++ b/include/sysemu/hw_accel.h
@@ -16,6 +16,7 @@
 #include "sysemu/kvm.h"
 #include "sysemu/hvf.h"
 #include "sysemu/whpx.h"
+#include "sysemu/nvmm.h"
 
 static inline void cpu_synchronize_state(CPUState *cpu)
 {
@@ -31,6 +32,9 @@ static inline void cpu_synchronize_state(CPUState *cpu)
     if (whpx_enabled()) {
         whpx_cpu_synchronize_state(cpu);
     }
+    if (nvmm_enabled()) {
+        nvmm_cpu_synchronize_state(cpu);
+    }
 }
 
 static inline void cpu_synchronize_post_reset(CPUState *cpu)
@@ -47,6 +51,10 @@ static inline void cpu_synchronize_post_reset(CPUState *cpu)
     if (whpx_enabled()) {
         whpx_cpu_synchronize_post_reset(cpu);
     }
+    if (nvmm_enabled()) {
+        nvmm_cpu_synchronize_post_reset(cpu);
+    }
+
 }
 
 static inline void cpu_synchronize_post_init(CPUState *cpu)
@@ -63,6 +71,9 @@ static inline void cpu_synchronize_post_init(CPUState *cpu)
     if (whpx_enabled()) {
         whpx_cpu_synchronize_post_init(cpu);
    }
+    if (nvmm_enabled()) {
+        nvmm_cpu_synchronize_post_init(cpu);
+    }
 }
 
 static inline void cpu_synchronize_pre_loadvm(CPUState *cpu)
@@ -79,6 +90,9 @@ static inline void cpu_synchronize_pre_loadvm(CPUState *cpu)
     if (whpx_enabled()) {
         whpx_cpu_synchronize_pre_loadvm(cpu);
     }
+    if (nvmm_enabled()) {
+        nvmm_cpu_synchronize_pre_loadvm(cpu);
+    }
 }
 
 #endif /* QEMU_HW_ACCEL_H */
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
index a802e899ab..3b44b92830 100644
--- a/softmmu/cpus.c
+++ b/softmmu/cpus.c
@@ -43,6 +43,7 @@
 #include "sysemu/hax.h"
 #include "sysemu/hvf.h"
 #include "sysemu/whpx.h"
+#include "sysemu/nvmm.h"
 #include "exec/exec-all.h"
 
 #include "qemu/thread.h"
@@ -1621,6 +1622,48 @@ static void *qemu_whpx_cpu_thread_fn(void *arg)
     return NULL;
 }
 
+static void *qemu_nvmm_cpu_thread_fn(void *arg)
+{
+    CPUState *cpu = arg;
+    int r;
+
+    assert(nvmm_enabled());
+
+    rcu_register_thread();
+
+    qemu_mutex_lock_iothread();
+    qemu_thread_get_self(cpu->thread);
+    cpu->thread_id = qemu_get_thread_id();
+    current_cpu = cpu;
+
+    r = nvmm_init_vcpu(cpu);
+    if (r < 0) {
+        fprintf(stderr, "nvmm_init_vcpu failed: %s\n", strerror(-r));
+        exit(1);
+    }
+
+    /* signal CPU creation */
+    cpu->created = true;
+    qemu_cond_signal(&qemu_cpu_cond);
+
+    do {
+        if (cpu_can_run(cpu)) {
+            r = nvmm_vcpu_exec(cpu);
+            if (r == EXCP_DEBUG) {
+                cpu_handle_guest_debug(cpu);
+            }
+        }
+        qemu_wait_io_event(cpu);
+    } while (!cpu->unplug || cpu_can_run(cpu));
+
+    nvmm_destroy_vcpu(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_mutex_unlock_iothread();
+    rcu_unregister_thread();
+    return NULL;
+}
+
 #ifdef _WIN32
 static void CALLBACK dummy_apc_func(ULONG_PTR unused)
 {
@@ -1998,6 +2041,19 @@ static void qemu_whpx_start_vcpu(CPUState *cpu)
 #endif
 }
 
+static void qemu_nvmm_start_vcpu(CPUState *cpu)
+{
+    char thread_name[VCPU_THREAD_NAME_SIZE];
+
+    cpu->thread = g_malloc0(sizeof(QemuThread));
+    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
+    qemu_cond_init(cpu->halt_cond);
+    snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/NVMM",
+             cpu->cpu_index);
+    qemu_thread_create(cpu->thread, thread_name, qemu_nvmm_cpu_thread_fn,
+                       cpu, QEMU_THREAD_JOINABLE);
+}
+
 static void qemu_dummy_start_vcpu(CPUState *cpu)
 {
     char thread_name[VCPU_THREAD_NAME_SIZE];
@@ -2038,6 +2094,8 @@ void qemu_init_vcpu(CPUState *cpu)
         qemu_tcg_init_vcpu(cpu);
     } else if (whpx_enabled()) {
         qemu_whpx_start_vcpu(cpu);
+    } else if (nvmm_enabled()) {
+        qemu_nvmm_start_vcpu(cpu);
     } else {
         qemu_dummy_start_vcpu(cpu);
     }
diff --git a/target/i386/helper.c b/target/i386/helper.c
index 70be53e2c3..c2f1aef65c 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -983,7 +983,7 @@ void cpu_report_tpr_access(CPUX86State *env, TPRAccess access)
     X86CPU *cpu = env_archcpu(env);
     CPUState *cs = env_cpu(env);
 
-    if (kvm_enabled() || whpx_enabled()) {
+    if (kvm_enabled() || whpx_enabled() || nvmm_enabled()) {
         env->tpr_access_type = access;
         cpu_interrupt(cs, CPU_INTERRUPT_TPR);
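Taken together, the QEMU glue in this series drives a small set of libnvmm
calls: the ones visible earlier in nvmm_accel_init() and
nvmm_ram_block_added(). The sketch below strings those calls together outside
of QEMU. It is an illustration only, not part of the series; the <nvmm.h>
header, the struct nvmm_capability/struct nvmm_machine type names,
NVMM_KERN_VERSION and the -lnvmm link library come from NetBSD's libnvmm and
should be read as assumptions here, as is the page-sized (4 KiB) buffer passed
to nvmm_hva_map().

/* Assumed build on a NetBSD host:  cc nvmm-smoke.c -lnvmm */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <nvmm.h>                      /* assumption: NetBSD libnvmm header */

int main(void)
{
    struct nvmm_capability cap;
    struct nvmm_machine mach;
    void *backing;

    /* Same bring-up sequence as nvmm_accel_init(). */
    if (nvmm_init() == -1) {
        fprintf(stderr, "nvmm_init: %s\n", strerror(errno));
        return 1;
    }
    if (nvmm_capability(&cap) == -1) {
        fprintf(stderr, "nvmm_capability: %s\n", strerror(errno));
        return 1;
    }
    if (cap.version != NVMM_KERN_VERSION) {
        fprintf(stderr, "unsupported NVMM version %u\n", (unsigned)cap.version);
        return 1;
    }
    if (nvmm_machine_create(&mach) == -1) {
        fprintf(stderr, "nvmm_machine_create: %s\n", strerror(errno));
        return 1;
    }

    /* Register one page of host memory with the machine, as
     * nvmm_ram_block_added() does for every QEMU RAM block. */
    backing = aligned_alloc(4096, 4096);
    if (backing == NULL) {
        fprintf(stderr, "aligned_alloc failed\n");
        return 1;
    }
    if (nvmm_hva_map(&mach, (uintptr_t)backing, 4096) == -1) {
        fprintf(stderr, "nvmm_hva_map: %s\n", strerror(errno));
        return 1;
    }

    puts("libnvmm bring-up OK");
    return 0;
}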