From patchwork Mon Sep 11 04:38:10 2017
X-Patchwork-Submitter: Haozhong Zhang
X-Patchwork-Id: 9946595
From: Haozhong Zhang
To: xen-devel@lists.xen.org
Date: Mon, 11 Sep 2017 12:38:10 +0800
Message-Id: <20170911043820.14617-30-haozhong.zhang@intel.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170911043820.14617-1-haozhong.zhang@intel.com>
References: <20170911043820.14617-1-haozhong.zhang@intel.com>
Cc: Haozhong Zhang, Wei Liu, Ian Jackson, Chao Peng, Dan Williams
Subject: [Xen-devel] [RFC XEN PATCH v3 29/39] tools: reserve guest memory for ACPI from device model

Some virtual devices (e.g. NVDIMM) require complex ACPI tables and
definition blocks (in AML) that a device model (e.g. QEMU) is already
able to construct. Rather than adding a redundant implementation to
Xen, we would like to reuse the device model to construct those ACPI
artifacts.

This commit allows Xen to reserve an area in guest memory through
which the device model can pass its ACPI tables and definition blocks
to the guest; hvmloader then loads them from there. The base guest
physical address and the size of the reserved area are passed to the
device model via the XenStore keys hvmloader/dm-acpi/{address, length}.
An xl config option "dm_acpi_pages = N" is added to specify the number
of reserved guest memory pages.
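As an illustrative sketch (not part of the patch), the address/length values that libxl writes to those XenStore keys can be derived from the reserved segment's start PFN and page count, assuming the usual 4 KiB x86 page size; the PFN value below is hypothetical:

```python
PAGE_SIZE = 1 << 12  # assumed x86 page size (what XC_DOM_PAGE_SIZE yields for HVM)

def dm_acpi_xs_values(pfn, pages):
    """Mirror the values written to hvmloader/dm-acpi/{address, length}:
    the address is the segment's start PFN scaled to bytes, the length is
    the page count scaled to bytes, and the address must lie below 4G."""
    address = pfn * PAGE_SIZE
    length = pages * PAGE_SIZE
    if address >= 1 << 32:
        raise ValueError("DM ACPI region expected below 4G")
    return hex(address), hex(length)

# A hypothetical 2-page region starting at PFN 0xff000:
print(dm_acpi_xs_values(0xff000, 2))  # ('0xff000000', '0x2000')
```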
Signed-off-by: Haozhong Zhang
---
Cc: Ian Jackson
Cc: Wei Liu
---
 tools/libxc/include/xc_dom.h            |  1 +
 tools/libxc/xc_dom_x86.c                | 13 +++++++++++++
 tools/libxl/libxl_dom.c                 | 25 +++++++++++++++++++++++++
 tools/libxl/libxl_types.idl             |  1 +
 tools/xl/xl_parse.c                     | 17 ++++++++++++++++-
 xen/include/public/hvm/hvm_xs_strings.h |  8 ++++++++
 6 files changed, 64 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
index ce47058c41..7c541576e7 100644
--- a/tools/libxc/include/xc_dom.h
+++ b/tools/libxc/include/xc_dom.h
@@ -93,6 +93,7 @@ struct xc_dom_image {
     struct xc_dom_seg pgtables_seg;
     struct xc_dom_seg devicetree_seg;
     struct xc_dom_seg start_info_seg; /* HVMlite only */
+    struct xc_dom_seg dm_acpi_seg; /* reserved PFNs for DM ACPI */
     xen_pfn_t start_info_pfn;
     xen_pfn_t console_pfn;
     xen_pfn_t xenstore_pfn;
diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
index cb68efcbd3..8755350295 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -674,6 +674,19 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
                          ioreq_server_pfn(0));
         xc_hvm_param_set(xch, domid, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
                          NR_IOREQ_SERVER_PAGES);
+
+        if ( dom->dm_acpi_seg.pages )
+        {
+            size_t acpi_size = dom->dm_acpi_seg.pages * XC_DOM_PAGE_SIZE(dom);
+
+            rc = xc_dom_alloc_segment(dom, &dom->dm_acpi_seg, "DM ACPI",
+                                      0, acpi_size);
+            if ( rc != 0 )
+            {
+                DOMPRINTF("Unable to reserve memory for DM ACPI");
+                goto out;
+            }
+        }
     }

     rc = xc_dom_alloc_segment(dom, &dom->start_info_seg,
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index f54fd49a73..bad1719892 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -897,6 +897,29 @@ static int hvm_build_set_xs_values(libxl__gc *gc,
             goto err;
     }

+    if (dom->dm_acpi_seg.pages) {
+        uint64_t guest_addr_out = dom->dm_acpi_seg.pfn * XC_DOM_PAGE_SIZE(dom);
+
+        if (guest_addr_out >= 0x100000000ULL) {
+            LOG(ERROR,
+                "Guest address of DM ACPI is 0x%"PRIx64", but expected below 4G",
+                guest_addr_out);
+            goto err;
+        }
+
+        path = GCSPRINTF("/local/domain/%d/"HVM_XS_DM_ACPI_ADDRESS, domid);
+        ret = libxl__xs_printf(gc, XBT_NULL, path, "0x%"PRIx64, guest_addr_out);
+        if (ret)
+            goto err;
+
+        path = GCSPRINTF("/local/domain/%d/"HVM_XS_DM_ACPI_LENGTH, domid);
+        ret = libxl__xs_printf(gc, XBT_NULL, path, "0x%"PRIx64,
+                               (uint64_t)(dom->dm_acpi_seg.pages *
+                                          XC_DOM_PAGE_SIZE(dom)));
+        if (ret)
+            goto err;
+    }
+
     return 0;

 err:
@@ -1184,6 +1207,8 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
             dom->vnode_to_pnode[i] = info->vnuma_nodes[i].pnode;
     }

+    dom->dm_acpi_seg.pages = info->u.hvm.dm_acpi_pages;
+
     rc = libxl__build_dom(gc, domid, info, state, dom);
     if (rc != 0)
         goto out;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 173d70acec..4acc0457f4 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -565,6 +565,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("rdm", libxl_rdm_reserve),
                                        ("rdm_mem_boundary_memkb", MemKB),
                                        ("mca_caps", uint64),
+                                       ("dm_acpi_pages", integer),
                                        ])),
                 ("pv", Struct(None, [("kernel", string),
                                      ("slack_memkb", MemKB),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 02ddd2e90d..ed562a1956 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -810,7 +810,7 @@ void parse_config_data(const char *config_source,
                        libxl_domain_config *d_config)
 {
     const char *buf;
-    long l, vcpus = 0;
+    long l, vcpus = 0, nr_dm_acpi_pages;
     XLU_Config *config;
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *vtpms,
                    *usbctrls, *usbdevs, *p9devs;
@@ -1929,6 +1929,21 @@ skip_usbdev:

 #undef parse_extra_args

+    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM &&
+        b_info->device_model_version != LIBXL_DEVICE_MODEL_VERSION_NONE) {
+        /* parse 'dm_acpi_pages' */
+        e = xlu_cfg_get_long(config, "dm_acpi_pages", &nr_dm_acpi_pages, 0);
+        if (e && e != ESRCH) {
+            fprintf(stderr, "ERROR: unable to parse dm_acpi_pages.\n");
+            exit(-ERROR_FAIL);
+        }
+        if (!e && nr_dm_acpi_pages <= 0) {
+            fprintf(stderr, "ERROR: require positive dm_acpi_pages.\n");
+            exit(-ERROR_FAIL);
+        }
+        b_info->u.hvm.dm_acpi_pages = nr_dm_acpi_pages;
+    }
+
     /* If we've already got vfb=[] for PV guest then ignore top level
      * VNC config. */
     if (c_info->type == LIBXL_DOMAIN_TYPE_PV && !d_config->num_vfbs) {
diff --git a/xen/include/public/hvm/hvm_xs_strings.h b/xen/include/public/hvm/hvm_xs_strings.h
index fea1dd4407..9f04ff2adc 100644
--- a/xen/include/public/hvm/hvm_xs_strings.h
+++ b/xen/include/public/hvm/hvm_xs_strings.h
@@ -80,4 +80,12 @@
  */
 #define HVM_XS_OEM_STRINGS             "bios-strings/oem-%d"

+/* If a range of guest memory is reserved to pass ACPI from the device
+ * model (e.g. QEMU), the start address and the size of the reserved
+ * guest memory are specified by the following two xenstore values.
+ */
+#define HVM_XS_DM_ACPI_ROOT            "hvmloader/dm-acpi"
+#define HVM_XS_DM_ACPI_ADDRESS         HVM_XS_DM_ACPI_ROOT"/address"
+#define HVM_XS_DM_ACPI_LENGTH          HVM_XS_DM_ACPI_ROOT"/length"
+
 #endif /* __XEN_PUBLIC_HVM_HVM_XS_STRINGS_H__ */
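For illustration only (not part of the patch), a device model could consume the keys above roughly as follows. The lookup is stubbed with an `xs_read` function standing in for the real `xenstore-read` client, and the domain id and values are hypothetical, so the sketch runs stand-alone:

```shell
# Stand-in for `xenstore-read`, stubbed with hypothetical values so this
# sketch is self-contained; on a real host the actual xenstore client is used.
xs_read() {
    case "$1" in
        */address) echo "0xff000000" ;;
        */length)  echo "0x2000" ;;
    esac
}

domid=1   # assumed guest domain id

# Paths follow HVM_XS_DM_ACPI_ADDRESS / HVM_XS_DM_ACPI_LENGTH defined above.
addr=$(xs_read "/local/domain/${domid}/hvmloader/dm-acpi/address")
len=$(xs_read "/local/domain/${domid}/hvmloader/dm-acpi/length")
echo "DM ACPI region at ${addr}, size ${len}"
```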