From patchwork Fri Jan 22 03:20:40 2016
X-Patchwork-Submitter: Yu Zhang
X-Patchwork-Id: 8087551
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Date: Fri, 22 Jan 2016 11:20:40 +0800
Message-Id: <1453432840-5319-4-git-send-email-yu.c.zhang@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1453432840-5319-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1453432840-5319-1-git-send-email-yu.c.zhang@linux.intel.com>
Cc: kevin.tian@intel.com, keir@xen.org, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com, andrew.cooper3@citrix.com,
	ian.jackson@eu.citrix.com, Paul.Durrant@citrix.com,
	zhiyuan.lv@intel.com, jbeulich@suse.com, wei.liu2@citrix.com
Subject: [Xen-devel] [PATCH v2 3/3] tools: introduce parameter max_wp_ram_ranges.

A new parameter - max_wp_ram_ranges - is added to set the upper limit of
write-protected RAM ranges to be tracked inside one ioreq server rangeset.

An ioreq server uses a group of rangesets to track the I/O or memory
resources to be emulated. The default limit of ranges that one rangeset
can allocate is set to a small value, because these ranges are allocated
in the Xen heap. Yet for write-protected RAM ranges, there are
circumstances under which the upper limit inside one rangeset should
exceed the default. E.g. in XenGT, when tracking the per-process graphic
translation tables on Intel Broadwell platforms, the number of page
tables concerned will be several thousand (normally in this case, 8192
could be a big enough value). Users who set this item explicitly are
supposed to know the specific scenarios that necessitate this
configuration.

Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
---
 docs/man/xl.cfg.pod.5           | 18 ++++++++++++++++++
 tools/libxl/libxl.h             |  5 +++++
 tools/libxl/libxl_dom.c         |  3 +++
 tools/libxl/libxl_types.idl     |  1 +
 tools/libxl/xl_cmdimpl.c        |  4 ++++
 xen/arch/x86/hvm/hvm.c          | 10 +++++++++-
 xen/include/public/hvm/params.h |  5 ++++-
 7 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 8899f75..7634c42 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -962,6 +962,24 @@ FIFO-based event channel ABI support up to 131,071 event channels.
 Other guests are limited to 4095 (64-bit x86 and ARM) or 1023 (32-bit
 x86).
 
+=item B<max_wp_ram_ranges=N>
+
+Limit the maximum number of write-protected RAM ranges that can be
+tracked inside one ioreq server rangeset.
+
+An ioreq server uses a group of rangesets to track the I/O or memory
+resources to be emulated. The default limit of ranges that one rangeset
+can allocate is set to a small value, because these ranges are
+allocated in the Xen heap. Yet for write-protected RAM ranges, there
+are circumstances under which the upper limit inside one rangeset
+should exceed the default. E.g. in XenGT, when tracking the per-process
+graphic translation tables on Intel Broadwell platforms, the number of
+page tables concerned will be several thousand (normally in this case,
+8192 could be a big enough value). Not configuring this item, or
+setting its value to 0, will leave the upper limit at its default.
+Users who set this item explicitly are supposed to know the specific
+scenarios that necessitate this configuration.
+
 =back
 
 =head2 Paravirtualised (PV) Guest Specific Options
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 156c0d5..6698d72 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -136,6 +136,11 @@
 #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
 
 /*
+ * libxl_domain_build_info has the u.hvm.max_wp_ram_ranges field.
+ */
+#define LIBXL_HAVE_BUILDINFO_HVM_MAX_WP_RAM_RANGES 1
+
+/*
  * libxl_domain_build_info has the u.hvm.ms_vm_genid field.
  */
 #define LIBXL_HAVE_BUILDINFO_HVM_MS_VM_GENID 1
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 2269998..54173cb 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -288,6 +288,9 @@ static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
                     libxl_defbool_val(info->u.hvm.nested_hvm));
     xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
                     libxl_defbool_val(info->u.hvm.altp2m));
+    if (info->u.hvm.max_wp_ram_ranges > 0)
+        xc_hvm_param_set(handle, domid, HVM_PARAM_MAX_WP_RAM_RANGES,
+                         info->u.hvm.max_wp_ram_ranges);
 }
 
 int libxl__build_pre(libxl__gc *gc, uint32_t domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9ad7eba..c7d7b5f 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -518,6 +518,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("serial_list",      libxl_string_list),
                                        ("rdm", libxl_rdm_reserve),
                                        ("rdm_mem_boundary_memkb", MemKB),
+                                       ("max_wp_ram_ranges", uint32),
                                        ])),
                  ("pv", Struct(None, [("kernel", string),
                                       ("slack_memkb", MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 25507c7..8bb7cc7 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1626,6 +1626,10 @@ static void parse_config_data(const char *config_source,
 
         if (!xlu_cfg_get_long (config, "rdm_mem_boundary", &l, 0))
             b_info->u.hvm.rdm_mem_boundary_memkb = l * 1024;
+
+        if (!xlu_cfg_get_long (config, "max_wp_ram_ranges", &l, 0))
+            b_info->u.hvm.max_wp_ram_ranges = l;
+
         break;
     case LIBXL_DOMAIN_TYPE_PV:
     {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 53d38e7..fd2b697 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -940,6 +940,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
 {
     unsigned int i;
     int rc;
+    unsigned int max_wp_ram_ranges =
+        ( s->domain->arch.hvm_domain.params[HVM_PARAM_MAX_WP_RAM_RANGES] > 0 ) ?
+        s->domain->arch.hvm_domain.params[HVM_PARAM_MAX_WP_RAM_RANGES] :
+        MAX_NR_IO_RANGES;
 
     if ( is_default )
         goto done;
@@ -962,7 +966,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
         if ( !s->range[i] )
             goto fail;
 
-        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
+        if ( i == HVMOP_IO_RANGE_WP_MEM )
+            rangeset_limit(s->range[i], max_wp_ram_ranges);
+        else
+            rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
     }
 
  done:
@@ -6009,6 +6016,7 @@ static int hvm_allow_set_param(struct domain *d,
     case HVM_PARAM_IOREQ_SERVER_PFN:
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     case HVM_PARAM_ALTP2M:
+    case HVM_PARAM_MAX_WP_RAM_RANGES:
         if ( value != 0 && a->value != value )
             rc = -EEXIST;
         break;
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 81f9451..ab3b11d 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -210,6 +210,9 @@
 /* Boolean: Enable altp2m */
 #define HVM_PARAM_ALTP2M                35
 
-#define HVM_NR_PARAMS          36
+/* Max write-protected ram ranges to be tracked in one ioreq server rangeset */
+#define HVM_PARAM_MAX_WP_RAM_RANGES     36
+
+#define HVM_NR_PARAMS          37
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
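
[Editor's note, not part of the patch] The new option is read from the
guest config file by the xl_cmdimpl.c hunk above and forwarded to the
hypervisor as HVM_PARAM_MAX_WP_RAM_RANGES in libxl_dom.c. A purely
illustrative config fragment (the guest name and sizes are made up; 8192
is the Broadwell GTT estimate quoted in the description) could look like:

    # hypothetical HVM guest config using the new option
    name    = "xengt-guest"
    builder = "hvm"
    memory  = 4096
    vcpus   = 4
    # raise the per-ioreq-server limit on write-protected RAM ranges;
    # 0 (or omitting the line) keeps the hypervisor default
    max_wp_ram_ranges = 8192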
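A toolstack that bypasses xl/libxl could set the same parameter directly
through libxc, using the xc_hvm_param_set() call that the libxl_dom.c hunk
above relies on. The sketch below is illustrative only (the helper name and
error handling are invented); note that the value has to be set before the
device model creates its ioreq server, since
hvm_ioreq_server_alloc_rangesets() samples the parameter when the rangesets
are created, and hvm_allow_set_param() rejects changing an already-set
non-zero value with -EEXIST:

    #include <stdio.h>
    #include <stdint.h>
    #include <xenctrl.h>
    #include <xen/hvm/params.h>

    /*
     * Illustrative helper (not part of the patch): raise the limit on
     * write-protected RAM ranges for one domain.  A value of 0 keeps the
     * hypervisor default (MAX_NR_IO_RANGES).
     */
    static int set_wp_ram_limit(xc_interface *xch, uint32_t domid,
                                uint64_t limit)
    {
        int rc = xc_hvm_param_set(xch, domid,
                                  HVM_PARAM_MAX_WP_RAM_RANGES, limit);

        if ( rc < 0 )
            fprintf(stderr, "HVM_PARAM_MAX_WP_RAM_RANGES: rc = %d\n", rc);

        return rc;
    }

Whichever path is used, the limit only affects ioreq server rangesets
allocated after the parameter is set, which is why libxl applies it during
domain build.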