From patchwork Fri Aug 2 15:36:05 2019
X-Patchwork-Submitter: Anthony PERARD
X-Patchwork-Id: 11073813
From: Anthony PERARD
To:
Date: Fri, 2 Aug 2019 16:36:05 +0100
Message-ID: <20190802153606.32061-35-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190802153606.32061-1-anthony.perard@citrix.com>
References: <20190802153606.32061-1-anthony.perard@citrix.com>
Subject: [Xen-devel] [PATCH 34/35] libxl: libxl_retrieve_domain_configuration now uses ev_qmp
Cc: Anthony PERARD, Ian Jackson, Wei Liu

This was the last user of libxl__qmp_query_cpus, which can now be
removed.

Signed-off-by: Anthony PERARD
Acked-by: Ian Jackson
---
 tools/libxl/libxl_domain.c   | 163 ++++++++++++++++++++++++++++-------
 tools/libxl/libxl_internal.h |   3 -
 tools/libxl/libxl_qmp.c      |  38 --------
 3 files changed, 131 insertions(+), 73 deletions(-)

diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index b97e874a9c..6a8ffe10f0 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -1800,27 +1800,6 @@ uint32_t libxl_vm_get_start_time(libxl_ctx *ctx, uint32_t domid)
     return ret;
 }
 
-/* For QEMU upstream we always need to provide the number of cpus present to
- * QEMU whether they are online or not; otherwise QEMU won't accept the saved
- * state. See implementation of libxl__qmp_query_cpus.
- */
-static int libxl__update_avail_vcpus_qmp(libxl__gc *gc, uint32_t domid,
-                                         unsigned int max_vcpus,
-                                         libxl_bitmap *map)
-{
-    int rc;
-
-    rc = libxl__qmp_query_cpus(gc, domid, map);
-    if (rc) {
-        LOGD(ERROR, domid, "Fail to get number of cpus");
-        goto out;
-    }
-
-    rc = 0;
-out:
-    return rc;
-}
-
 static int libxl__update_avail_vcpus_xenstore(libxl__gc *gc, uint32_t domid,
                                               unsigned int max_vcpus,
                                               libxl_bitmap *map)
@@ -1849,13 +1828,61 @@ static int libxl__update_avail_vcpus_xenstore(libxl__gc *gc, uint32_t domid,
     return rc;
 }
 
+typedef struct {
+    libxl__ev_qmp qmp;
+    libxl__ev_time timeout;
+    libxl_domain_config *d_config; /* user pointer */
+    libxl__ev_lock ev_lock;
+    libxl_bitmap qemuu_cpus;
+} retrieve_domain_configuration_state;
+
+static void retrieve_domain_configuration_lock_acquired(
+    libxl__egc *egc, libxl__ev_lock *, int rc);
+static void retrieve_domain_configuration_cpu_queried(
+    libxl__egc *egc, libxl__ev_qmp *qmp,
+    const libxl__json_object *response, int rc);
+static void retrieve_domain_configuration_timeout(libxl__egc *egc,
+    libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
+static void retrieve_domain_configuration_end(libxl__egc *egc,
+    retrieve_domain_configuration_state *rdcs, int rc);
+
 int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
                                         libxl_domain_config *d_config,
                                         const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
-    int rc;
+    retrieve_domain_configuration_state *rdcs;
+
+    GCNEW(rdcs);
+    libxl__ev_qmp_init(&rdcs->qmp);
+    rdcs->qmp.ao = ao;
+    rdcs->qmp.domid = domid;
+    rdcs->qmp.payload_fd = -1;
+    libxl__ev_time_init(&rdcs->timeout);
+    rdcs->d_config = d_config;
+    libxl_bitmap_init(&rdcs->qemuu_cpus);
+    libxl__ev_lock_init(&rdcs->ev_lock);
+    rdcs->ev_lock.ao = ao;
+    rdcs->ev_lock.domid = domid;
+    rdcs->ev_lock.callback = retrieve_domain_configuration_lock_acquired;
+    libxl__ev_lock_get(egc, &rdcs->ev_lock);
+    return AO_INPROGRESS;
+}
+
+static void retrieve_domain_configuration_lock_acquired(
+    libxl__egc *egc, libxl__ev_lock *ev_lock, int rc)
+{
+    retrieve_domain_configuration_state *rdcs =
+        CONTAINER_OF(ev_lock, *rdcs, ev_lock);
+    STATE_AO_GC(rdcs->qmp.ao);
     libxl__domain_userdata_lock *lock = NULL;
+    bool has_callback = false;
+
+    /* Convenience aliases */
+    libxl_domid domid = rdcs->qmp.domid;
+    libxl_domain_config *const d_config = rdcs->d_config;
+
+    if (rc) goto out;
 
     lock = libxl__lock_domain_userdata(gc, domid);
     if (!lock) {
@@ -1870,10 +1897,81 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
         goto out;
     }
 
+    libxl__unlock_domain_userdata(lock);
+    lock = NULL;
+
+    /* We start by querying QEMU, if it is running, for its cpumap as this
+     * is a long operation. */
+    if (d_config->b_info.type == LIBXL_DOMAIN_TYPE_HVM &&
+        libxl__device_model_version_running(gc, domid) ==
+            LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN) {
+        /* For QEMU upstream we always need to provide the number
+         * of cpus present to QEMU whether they are online or not;
+         * otherwise QEMU won't accept the saved state.
+         */
+        rc = libxl__ev_time_register_rel(ao, &rdcs->timeout,
+                                         retrieve_domain_configuration_timeout,
+                                         LIBXL_QMP_CMD_TIMEOUT * 1000);
+        if (rc) goto out;
+        libxl_bitmap_alloc(CTX, &rdcs->qemuu_cpus,
+                           d_config->b_info.max_vcpus);
+        rdcs->qmp.callback = retrieve_domain_configuration_cpu_queried;
+        rc = libxl__ev_qmp_send(gc, &rdcs->qmp, "query-cpus", NULL);
+        if (rc) goto out;
+        has_callback = true;
+    }
+
+out:
+    if (lock) libxl__unlock_domain_userdata(lock);
+    if (!has_callback)
+        retrieve_domain_configuration_end(egc, rdcs, rc);
+}
+
+static void retrieve_domain_configuration_cpu_queried(
+    libxl__egc *egc, libxl__ev_qmp *qmp,
+    const libxl__json_object *response, int rc)
+{
+    EGC_GC;
+    retrieve_domain_configuration_state *rdcs =
+        CONTAINER_OF(qmp, *rdcs, qmp);
+
+    if (rc) goto out;
+
+    rc = qmp_parse_query_cpus(gc, qmp->domid, response, &rdcs->qemuu_cpus);
+
+out:
+    retrieve_domain_configuration_end(egc, rdcs, rc);
+}
+
+static void retrieve_domain_configuration_timeout(libxl__egc *egc,
+    libxl__ev_time *ev, const struct timeval *requested_abs, int rc)
+{
+    retrieve_domain_configuration_state *rdcs =
+        CONTAINER_OF(ev, *rdcs, timeout);
+
+    retrieve_domain_configuration_end(egc, rdcs, rc);
+}
+
+static void retrieve_domain_configuration_end(libxl__egc *egc,
+    retrieve_domain_configuration_state *rdcs, int rc)
+{
+    STATE_AO_GC(rdcs->qmp.ao);
+    libxl__domain_userdata_lock *lock;
+
+    /* Convenience aliases */
+    libxl_domain_config *const d_config = rdcs->d_config;
+    libxl_domid domid = rdcs->qmp.domid;
+
+    lock = libxl__lock_domain_userdata(gc, domid);
+    if (!lock) {
+        rc = ERROR_LOCK_FAIL;
+        goto out;
+    }
+
     /* Domain name */
     {
         char *domname;
-        domname = libxl_domid_to_name(ctx, domid);
+        domname = libxl_domid_to_name(CTX, domid);
         if (!domname) {
             LOGD(ERROR, domid, "Fail to get domain name");
             goto out;
@@ -1886,13 +1984,13 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
     {
         libxl_dominfo info;
         libxl_dominfo_init(&info);
-        rc = libxl_domain_info(ctx, &info, domid);
+        rc = libxl_domain_info(CTX, &info, domid);
         if (rc) {
             LOGD(ERROR, domid, "Fail to get domain info");
             libxl_dominfo_dispose(&info);
             goto out;
         }
-        libxl_uuid_copy(ctx, &d_config->c_info.uuid, &info.uuid);
+        libxl_uuid_copy(CTX, &d_config->c_info.uuid, &info.uuid);
         libxl_dominfo_dispose(&info);
     }
 
@@ -1913,8 +2011,7 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
         assert(version != LIBXL_DEVICE_MODEL_VERSION_UNKNOWN);
         switch (version) {
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-            rc = libxl__update_avail_vcpus_qmp(gc, domid,
-                                               max_vcpus, map);
+            libxl_bitmap_copy(CTX, map, &rdcs->qemuu_cpus);
             break;
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
             rc = libxl__update_avail_vcpus_xenstore(gc, domid,
@@ -1939,6 +2036,7 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
         }
     }
 
+
     /* Memory limits:
      *
      * Currently there are three memory limits:
@@ -1972,7 +2070,7 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
     /* Scheduler params */
     {
         libxl_domain_sched_params_dispose(&d_config->b_info.sched_params);
-        rc = libxl_domain_sched_params_get(ctx, domid,
+        rc = libxl_domain_sched_params_get(CTX, domid,
                                            &d_config->b_info.sched_params);
         if (rc) {
             LOGD(ERROR, domid, "Fail to get scheduler parameters");
@@ -2034,7 +2132,7 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
 
             if (j < num) {         /* found in xenstore */
                 if (dt->merge)
-                    dt->merge(ctx, p + dt->dev_elem_size * j, q);
+                    dt->merge(CTX, p + dt->dev_elem_size * j, q);
             } else {                /* not found in xenstore */
                 LOGD(WARN, domid,
                      "Device present in JSON but not in xenstore, ignored");
@@ -2062,11 +2160,12 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
     }
 
 out:
+    libxl__ev_unlock(gc, &rdcs->ev_lock);
     if (lock) libxl__unlock_domain_userdata(lock);
-    if (rc)
-        return AO_CREATE_FAIL(rc);
+    libxl_bitmap_dispose(&rdcs->qemuu_cpus);
+    libxl__ev_qmp_dispose(gc, &rdcs->qmp);
+    libxl__ev_time_deregister(gc, &rdcs->timeout);
     libxl__ao_complete(egc, ao, rc);
-    return AO_INPROGRESS;
 }
 
 /*
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 03e99b23f5..9144bc202d 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1987,9 +1987,6 @@ _hidden libxl__qmp_handler *libxl__qmp_initialize(libxl__gc *gc,
 _hidden int libxl__qmp_resume(libxl__gc *gc, int domid);
 /* Load current QEMU state from file. */
 _hidden int libxl__qmp_restore(libxl__gc *gc, int domid, const char *filename);
-/* Query the bitmap of CPUs */
-_hidden int libxl__qmp_query_cpus(libxl__gc *gc, int domid,
-                                  libxl_bitmap *map);
 /* Start NBD server */
 _hidden int libxl__qmp_nbd_server_start(libxl__gc *gc, int domid,
                                         const char *host, const char *port);
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index 27183bc6c4..9639d491d9 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -767,44 +767,6 @@ int libxl__qmp_resume(libxl__gc *gc, int domid)
     return qmp_run_command(gc, domid, "cont", NULL, NULL, NULL);
 }
 
-static int query_cpus_callback(libxl__qmp_handler *qmp,
-                               const libxl__json_object *response,
-                               void *opaque)
-{
-    libxl_bitmap *map = opaque;
-    unsigned int i;
-    const libxl__json_object *cpu = NULL;
-    int rc;
-    GC_INIT(qmp->ctx);
-
-    libxl_bitmap_set_none(map);
-    for (i = 0; (cpu = libxl__json_array_get(response, i)); i++) {
-        unsigned int idx;
-        const libxl__json_object *o;
-
-        o = libxl__json_map_get("CPU", cpu, JSON_INTEGER);
-        if (!o) {
-            LOGD(ERROR, qmp->domid, "Failed to retrieve CPU index.");
-            rc = ERROR_FAIL;
-            goto out;
-        }
-
-        idx = libxl__json_object_get_integer(o);
-        libxl_bitmap_set(map, idx);
-    }
-
-    rc = 0;
-out:
-    GC_FREE;
-    return rc;
-}
-
-int libxl__qmp_query_cpus(libxl__gc *gc, int domid, libxl_bitmap *map)
-{
-    return qmp_run_command(gc, domid, "query-cpus", NULL,
-                           query_cpus_callback, map);
-}
-
 int libxl__qmp_nbd_server_start(libxl__gc *gc, int domid, const char *host,
                                 const char *port)
 {
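
For context (not part of the patch): a minimal sketch of how a libxl client
might call libxl_retrieve_domain_configuration after this change. Passing
ao_how == NULL keeps the call synchronous from the caller's point of view,
even though libxl now drives the "query-cpus" QMP command asynchronously via
libxl__ev_qmp underneath. The dump_domain_config helper is hypothetical and
error handling is abbreviated.

/* Hypothetical helper, not part of libxl: fetch and print part of a
 * domain's stored configuration.  ao_how == NULL means libxl completes
 * the internal async operation before returning, so no event-loop
 * integration is needed here. */
#include <stdio.h>
#include <libxl.h>

int dump_domain_config(libxl_ctx *ctx, uint32_t domid)
{
    libxl_domain_config d_config;
    int rc;

    libxl_domain_config_init(&d_config);

    rc = libxl_retrieve_domain_configuration(ctx, domid, &d_config, NULL);
    if (rc) {
        fprintf(stderr, "retrieve_domain_configuration failed: %d\n", rc);
        goto out;
    }

    printf("domain %u: max_vcpus=%d\n", (unsigned)domid,
           d_config.b_info.max_vcpus);

out:
    libxl_domain_config_dispose(&d_config);
    return rc;
}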