From patchwork Wed Oct 30 18:07:04 2019
X-Patchwork-Submitter: Anthony PERARD
X-Patchwork-Id: 11219907
From: Anthony PERARD
Date: Wed, 30 Oct 2019 18:07:04 +0000
Message-ID: <20191030180704.261320-7-anthony.perard@citrix.com>
In-Reply-To: <20191030180704.261320-1-anthony.perard@citrix.com>
References: <20191030180704.261320-1-anthony.perard@citrix.com>
Subject: [Xen-devel] [XEN PATCH for-4.13 v2 6/6] libxl_qmp: Have a lock for QMP socket access
List-Id: Xen developer discussion
Cc: Anthony PERARD, Jürgen Groß, Ian Jackson, Wei Liu, Sander Eikelenboom

This patch works around the fact that it is not possible to connect
multiple times to a single QMP socket. QEMU listens on the socket with
a backlog value of 1, which means that on Linux, when concurrent
threads call connect() on the socket, they get EAGAIN.

Background:
This happens when attempting to create a guest with multiple PCI
devices passed through: libxl creates one connection per device to
attach, and executes connect() on all of them at once, before any
single connection has finished.

To work around this, we use a new lock.

Error handling of connect() and lock() is a bit awkward, as
libxl__ev_qmp_send() does not allow the callback to be called
synchronously. So we set up a timer in order to get a callback that is
called asynchronously. We use the _abs variant because it does strictly
less than _rel, thus avoiding unnecessary code that could return an
error (unnecessary because we only need the callback to be called as
soon as possible).
Reported-by: Sander Eikelenboom
Signed-off-by: Anthony PERARD
---
Notes:
    v2:
    - Handle error path

 tools/libxl/libxl_internal.c |   5 ++
 tools/libxl/libxl_internal.h |   9 +++
 tools/libxl/libxl_qmp.c      | 109 ++++++++++++++++++++++++++++++-----
 3 files changed, 108 insertions(+), 15 deletions(-)

diff --git a/tools/libxl/libxl_internal.c b/tools/libxl/libxl_internal.c
index b2084157e4cd..ba5637358e7c 100644
--- a/tools/libxl/libxl_internal.c
+++ b/tools/libxl/libxl_internal.c
@@ -590,6 +590,11 @@ void libxl__ev_devlock_init(libxl__ev_slowlock *lock)
     ev_slowlock_init_internal(lock, "libxl-device-changes-lock");
 }
 
+void libxl__ev_qmplock_init(libxl__ev_slowlock *lock)
+{
+    ev_slowlock_init_internal(lock, "qmp-socket-lock");
+}
+
 static void ev_lock_prepare_fork(libxl__egc *egc, libxl__ev_slowlock *lock);
 static void ev_lock_child_callback(libxl__egc *egc, libxl__ev_child *child,
                                    pid_t pid, int status);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index f95895eae17d..b244b508f4c6 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -371,6 +371,9 @@ struct libxl__ev_child {
  * which may take a significant amount of time.
  * It is to be acquired by an ao event callback.
  *
+ * If libxl__ev_devlock is needed, it should be acquired while every
+ * libxl__ev_qmp is Idle for the current domain.
+ *
  * It is to be acquired when adding/removing devices or making changes
  * to them when this is a slow operation and json_lock isn't appropriate.
  *
@@ -411,6 +414,7 @@ struct libxl__ev_slowlock {
     bool held;
 };
 _hidden void libxl__ev_devlock_init(libxl__ev_slowlock *);
+_hidden void libxl__ev_qmplock_init(libxl__ev_slowlock *);
 _hidden void libxl__ev_slowlock_lock(libxl__egc *, libxl__ev_slowlock *);
 _hidden void libxl__ev_slowlock_unlock(libxl__gc *, libxl__ev_slowlock *);
 _hidden void libxl__ev_slowlock_dispose(libxl__gc *, libxl__ev_slowlock *);
@@ -479,6 +483,8 @@ _hidden void libxl__ev_qmp_dispose(libxl__gc *gc, libxl__ev_qmp *ev);
 typedef enum {
     /* initial state */
     qmp_state_disconnected = 1,
+    /* waiting for lock */
+    qmp_state_waiting_lock,
     /* connected to QMP socket, waiting for greeting message */
     qmp_state_connecting,
     /* qmp_capabilities command sent, waiting for reply */
@@ -512,6 +518,9 @@ struct libxl__ev_qmp {
     libxl__carefd *cfd;
     libxl__ev_fd efd;
     libxl__qmp_state state;
+    libxl__ev_slowlock lock;
+    libxl__ev_time etime;
+    int rc;
     int id;
     int next_id;        /* next id to use */
     /* receive buffer */
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index f0e0b50bd1c5..2087a85e321d 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -1084,6 +1084,7 @@ static void dm_state_saved(libxl__egc *egc, libxl__ev_qmp *ev,
  *
  * qmp_state        External  cfd   efd     id     rx_buf*  tx_buf*  msg*
  *  disconnected    Idle      NULL  Idle    reset  free     free     free
+ *  waiting_lock    Active    open  Idle    reset  used     free     set
  *  connecting      Active    open  IN      reset  used     free     set
  *  cap.neg         Active    open  IN|OUT  sent   used     cap_neg  set
  *  cap.neg         Active    open  IN      sent   used     free     set
@@ -1118,7 +1119,8 @@ static void dm_state_saved(libxl__egc *egc, libxl__ev_qmp *ev,
  *   msg_id          0      id associated with the command in `msg`
  *
  * - Allowed internal state transition:
- *     disconnected -> connecting
+ *     disconnected -> waiting_lock
+ *     waiting_lock -> connecting
  *     connection -> capability_negotiation
  *     capability_negotiation/connected -> waiting_reply
  *     waiting_reply -> connected
@@ -1153,6 +1155,10 @@ static void qmp_ev_ensure_reading_writing(libxl__gc *gc, libxl__ev_qmp *ev)
 {
     short events = POLLIN;
 
+    if (ev->state == qmp_state_waiting_lock)
+        /* We can't modify the efd yet, as it isn't registered. */
+        return;
+
     if (ev->tx_buf)
         events |= POLLOUT;
     else if ((ev->state == qmp_state_waiting_reply) && ev->msg)
@@ -1168,9 +1174,12 @@ static void qmp_ev_set_state(libxl__gc *gc, libxl__ev_qmp *ev,
     switch (new_state) {
     case qmp_state_disconnected:
         break;
-    case qmp_state_connecting:
+    case qmp_state_waiting_lock:
         assert(ev->state == qmp_state_disconnected);
         break;
+    case qmp_state_connecting:
+        assert(ev->state == qmp_state_waiting_lock);
+        break;
     case qmp_state_capability_negotiation:
         assert(ev->state == qmp_state_connecting);
         break;
@@ -1231,20 +1240,23 @@ static int qmp_error_class_to_libxl_error_code(libxl__gc *gc,
 
 /* Setup connection */
 
-static int qmp_ev_connect(libxl__gc *gc, libxl__ev_qmp *ev)
-    /* disconnected -> connecting but with `msg` free
+static void qmp_ev_lock_acquired(libxl__egc *, libxl__ev_slowlock *,
+                                 int rc);
+static void lock_error_callback(libxl__egc *, libxl__ev_time *,
+                                const struct timeval *, int rc);
+
+static int qmp_ev_connect(libxl__egc *egc, libxl__ev_qmp *ev)
+    /* disconnected -> waiting_lock/connecting but with `msg` free
      * on error: broken */
 {
+    EGC_GC;
     int fd;
-    int rc, r;
-    struct sockaddr_un un;
-    const char *qmp_socket_path;
-
-    assert(ev->state == qmp_state_disconnected);
+    int rc;
 
-    qmp_socket_path = libxl__qemu_qmp_path(gc, ev->domid);
+    /* Convenience aliases */
+    libxl__ev_slowlock *lock = &ev->lock;
 
-    LOGD(DEBUG, ev->domid, "Connecting to %s", qmp_socket_path);
+    assert(ev->state == qmp_state_disconnected);
 
     libxl__carefd_begin();
     fd = socket(AF_UNIX, SOCK_STREAM, 0);
@@ -1258,6 +1270,34 @@ static int qmp_ev_connect(libxl__gc *gc, libxl__ev_qmp *ev)
     if (rc)
         goto out;
 
+    qmp_ev_set_state(gc, ev, qmp_state_waiting_lock);
+
+    lock->ao = ev->ao;
+    lock->domid = ev->domid;
+    lock->callback = qmp_ev_lock_acquired;
+    libxl__ev_slowlock_lock(egc, &ev->lock);
+
+    return 0;
+
+out:
+    return rc;
+}
+
+static void qmp_ev_lock_acquired(libxl__egc *egc, libxl__ev_slowlock *lock,
+                                 int rc)
+{
+    libxl__ev_qmp *ev = CONTAINER_OF(lock, *ev, lock);
+    EGC_GC;
+    const char *qmp_socket_path;
+    struct sockaddr_un un;
+    int r;
+
+    if (rc) goto out;
+
+    qmp_socket_path = libxl__qemu_qmp_path(gc, ev->domid);
+
+    LOGD(DEBUG, ev->domid, "Connecting to %s", qmp_socket_path);
+
     rc = libxl__prepare_sockaddr_un(gc, &un, qmp_socket_path, "QMP socket");
     if (rc)
@@ -1279,10 +1319,43 @@
 
     qmp_ev_set_state(gc, ev, qmp_state_connecting);
 
-    return 0;
+    return;
 
 out:
-    return rc;
+    /* An error occurred and we need to let the caller know.  At this
+     * point, we can only do so via the callback.  Unfortunately, the
+     * callback of libxl__ev_slowlock_lock() might be called
+     * synchronously, but libxl__ev_qmp_send() promises that it will
+     * not call the callback synchronously.  So we have to arrange to
+     * call the callback asynchronously. */
+    ev->rc = rc;
+    struct timeval now = { 0 };
+    rc = libxl__ev_time_register_abs(ev->ao, &ev->etime,
+                                     lock_error_callback, now);
+    /* If setting up the timer failed, there is no way to tell the
+     * caller of libxl__ev_qmp_send() that the connection to the QMP
+     * socket failed.  But they are supposed to have a timer of their
+     * own. */
+    if (rc)
+        LOGD(ERROR, ev->domid,
+             "Failed to setup a callback call. rc=%d", rc);
+}
+
+static void lock_error_callback(libxl__egc *egc, libxl__ev_time *t,
+                                const struct timeval *requested_abs,
+                                int rc)
+{
+    EGC_GC;
+    libxl__ev_qmp *ev = CONTAINER_OF(t, *ev, etime);
+
+    /* We are only interested in the `rc' set during the setup of the
+     * connection to the QMP socket. */
+    rc = ev->rc;
+
+    /* On error, deallocate all private resources */
+    libxl__ev_qmp_dispose(gc, ev);
+
+    /* And tell the libxl__ev_qmp user about the error */
+    ev->callback(egc, ev, NULL, rc); /* must be last */
 }
 
 /* QMP FD callbacks */
@@ -1779,11 +1852,15 @@ void libxl__ev_qmp_init(libxl__ev_qmp *ev)
     ev->qemu_version.major = -1;
     ev->qemu_version.minor = -1;
     ev->qemu_version.micro = -1;
+
+    libxl__ev_qmplock_init(&ev->lock);
+    ev->rc = 0;
+    libxl__ev_time_init(&ev->etime);
 }
 
 int libxl__ev_qmp_send(libxl__egc *egc, libxl__ev_qmp *ev,
                        const char *cmd, libxl__json_object *args)
-    /* disconnected -> connecting
+    /* disconnected -> waiting_lock/connecting
      * connected -> waiting_reply (with msg set)
      * on error: disconnected */
 {
@@ -1798,7 +1875,7 @@ int libxl__ev_qmp_send(libxl__egc *egc, libxl__ev_qmp *ev,
 
     /* Connect to QEMU if not already connected */
     if (ev->state == qmp_state_disconnected) {
-        rc = qmp_ev_connect(gc, ev);
+        rc = qmp_ev_connect(egc, ev);
         if (rc)
             goto error;
     }
@@ -1830,6 +1907,8 @@ void libxl__ev_qmp_dispose(libxl__gc *gc, libxl__ev_qmp *ev)
 
     libxl__ev_fd_deregister(gc, &ev->efd);
     libxl__carefd_close(ev->cfd);
+    libxl__ev_time_deregister(gc, &ev->etime);
+    libxl__ev_slowlock_dispose(gc, &ev->lock);
 
     libxl__ev_qmp_init(ev);
 }