From patchwork Tue Dec 29 11:28:47 2015
From: Haozhong Zhang <haozhong.zhang@intel.com>
To: xen-devel@lists.xensource.com
Date: Tue, 29 Dec 2015 19:28:47 +0800
Message-Id: <1451388527-18009-3-git-send-email-haozhong.zhang@intel.com>
In-Reply-To: <1451388527-18009-1-git-send-email-haozhong.zhang@intel.com>
References: <1451388527-18009-1-git-send-email-haozhong.zhang@intel.com>
Cc: Haozhong Zhang, Xiao Guangrong, Eduardo Habkost, "Michael S. Tsirkin",
    Stefano Stabellini, Paolo Bonzini, Igor Mammedov, Richard Henderson
Subject: [Xen-devel] [PATCH 2/2] pc-nvdimm acpi: build ACPI tables for pc-nvdimm devices

Reuse the existing NVDIMM ACPI code to build ACPI tables for pc-nvdimm
devices. The resulting tables are then copied into the Xen guest domain
so that they can later be loaded by Xen hvmloader.
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
 hw/acpi/nvdimm.c     |  5 +++-
 hw/i386/pc.c         |  6 ++++-
 include/hw/xen/xen.h |  2 ++
 xen-hvm.c            | 71 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 82 insertions(+), 2 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index df1b176..7c4b931 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -29,12 +29,15 @@
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/aml-build.h"
 #include "hw/mem/nvdimm.h"
+#include "hw/mem/pc-nvdimm.h"
+#include "hw/xen/xen.h"

 static int nvdimm_plugged_device_list(Object *obj, void *opaque)
 {
     GSList **list = opaque;
+    const char *type_name = xen_enabled() ? TYPE_PC_NVDIMM : TYPE_NVDIMM;

-    if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
+    if (object_dynamic_cast(obj, type_name)) {
         DeviceState *dev = DEVICE(obj);

         if (dev->realized) { /* only realized NVDIMMs matter */
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 459260b..fadacf5 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1186,7 +1186,11 @@ void pc_guest_info_machine_done(Notifier *notifier, void *data)
         }
     }

-    acpi_setup(&guest_info_state->info);
+    if (!xen_enabled()) {
+        acpi_setup(&guest_info_state->info);
+    } else if (xen_hvm_acpi_setup(PC_MACHINE(qdev_get_machine()))) {
+        error_report("Warning: failed to initialize Xen HVM ACPI tables");
+    }
 }

 PcGuestInfo *pc_guest_info_init(PCMachineState *pcms)
diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index e90931a..8b705e1 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -51,4 +51,6 @@ void xen_register_framebuffer(struct MemoryRegion *mr);
 # define HVM_MAX_VCPUS 32
 #endif

+int xen_hvm_acpi_setup(PCMachineState *pcms);
+
 #endif /* QEMU_HW_XEN_H */
diff --git a/xen-hvm.c b/xen-hvm.c
index 6ebf43f..f1f5e77 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -26,6 +26,13 @@
 #include
 #include

+#include "qemu/error-report.h"
+#include "hw/acpi/acpi.h"
+#include "hw/acpi/aml-build.h"
+#include "hw/acpi/bios-linker-loader.h"
+#include "hw/mem/nvdimm.h"
+#include "hw/mem/pc-nvdimm.h"
+
 //#define DEBUG_XEN_HVM

 #ifdef DEBUG_XEN_HVM
@@ -1330,6 +1337,70 @@ int xen_hvm_init(PCMachineState *pcms,
     return 0;
 }

+int xen_hvm_acpi_setup(PCMachineState *pcms)
+{
+    AcpiBuildTables *hvm_acpi_tables;
+    GArray *tables_blob, *table_offsets;
+
+    ram_addr_t acpi_tables_addr, acpi_tables_size;
+    void *host;
+
+    struct xs_handle *xs = NULL;
+    char path[80], value[17];
+
+    if (!pcms->nvdimm) {
+        return 0;
+    }
+
+    hvm_acpi_tables = g_malloc0(sizeof(AcpiBuildTables));
+    if (!hvm_acpi_tables) {
+        return -1;
+    }
+    acpi_build_tables_init(hvm_acpi_tables);
+    tables_blob = hvm_acpi_tables->table_data;
+    table_offsets = g_array_new(false, true, sizeof(uint32_t));
+    bios_linker_loader_alloc(hvm_acpi_tables->linker,
+                             ACPI_BUILD_TABLE_FILE, 64, false);
+
+    /* build NFIT tables */
+    nvdimm_build_acpi(table_offsets, tables_blob, hvm_acpi_tables->linker);
+    g_array_free(table_offsets, true);
+
+    /* copy ACPI tables into VM */
+    acpi_tables_size = tables_blob->len;
+    acpi_tables_addr =
+        (pcms->below_4g_mem_size - acpi_tables_size) & XC_PAGE_MASK;
+    host = xc_map_foreign_range(xen_xc, xen_domid,
+                                ROUND_UP(acpi_tables_size, XC_PAGE_SIZE),
+                                PROT_READ | PROT_WRITE,
+                                acpi_tables_addr >> XC_PAGE_SHIFT);
+    memcpy(host, tables_blob->data, acpi_tables_size);
+    munmap(host, ROUND_UP(acpi_tables_size, XC_PAGE_SIZE));
+
+    /* write address and size of ACPI tables to xenstore */
+    xs = xs_open(0);
+    if (xs == NULL) {
+        error_report("could not contact XenStore\n");
+        return -1;
+    }
+    snprintf(path, sizeof(path),
+             "/local/domain/%d/hvmloader/dm-acpi/address", xen_domid);
+    snprintf(value, sizeof(value), "%"PRIu64, (uint64_t) acpi_tables_addr);
+    if (!xs_write(xs, 0, path, value, strlen(value))) {
+        error_report("failed to write NFIT base address to xenstore\n");
+        return -1;
+    }
+    snprintf(path, sizeof(path),
+             "/local/domain/%d/hvmloader/dm-acpi/length", xen_domid);
+    snprintf(value, sizeof(value), "%"PRIu64, (uint64_t) acpi_tables_size);
+    if (!xs_write(xs, 0, path, value, strlen(value))) {
+        error_report("failed to write NFIT size to xenstore\n");
+        return -1;
+    }
+
+    return 0;
+}
+
 void destroy_hvm_domain(bool reboot)
 {
     XenXC xc_handle;