From patchwork Tue Nov 8 17:07:48 2022
X-Patchwork-Submitter: Eugenio Perez Martin
X-Patchwork-Id: 13036553
From: Eugenio Pérez
To: qemu-devel@nongnu.org
Cc: Parav Pandit, Stefan Hajnoczi, Si-Wei Liu, Laurent Vivier,
    Harpreet Singh Anand, "Michael S. Tsirkin", Gautam Dawar, Liuxiangdong,
    Stefano Garzarella, Jason Wang, Cindy Lu, Eli Cohen, Cornelia Huck,
    Zhu Lingshan, kvm@vger.kernel.org, "Gonglei (Arei)", Paolo Bonzini
Subject: [PATCH v6 03/10] vhost: Allocate SVQ device file descriptors at device start
Date: Tue, 8 Nov 2022 18:07:48 +0100
Message-Id: <20221108170755.92768-4-eperezma@redhat.com>
In-Reply-To: <20221108170755.92768-1-eperezma@redhat.com>
References: <20221108170755.92768-1-eperezma@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

The next patches will start the control SVQ if possible. However, whether
that is possible is no longer known at qemu boot time. Delay the allocation
of the SVQ device file descriptors until device start, when that information
is available.
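In short, after this patch the hdev_kick/hdev_call notifier lifecycle becomes
the following (a condensed sketch of the diff below; error unwinding omitted):

    vhost_svq_new():            /* no device notifier allocation anymore */

    vhost_vdpa_svq_set_fds():   /* at device start */
        event_notifier_init(&svq->hdev_kick, 0);
        event_notifier_init(&svq->hdev_call, 0);

    vhost_vdpa_svqs_stop():     /* at device stop */
        event_notifier_cleanup(&svq->hdev_kick);
        event_notifier_cleanup(&svq->hdev_call);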
Signed-off-by: Eugenio Pérez
Acked-by: Jason Wang
---
 hw/virtio/vhost-shadow-virtqueue.c | 31 ++------------------------
 hw/virtio/vhost-vdpa.c             | 35 ++++++++++++++++++++++++------
 2 files changed, 30 insertions(+), 36 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 264ddc166d..3b05bab44d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -715,43 +715,18 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
  * @iova_tree: Tree to perform descriptors translations
  * @ops: SVQ owner callbacks
  * @ops_opaque: ops opaque pointer
- *
- * Returns the new virtqueue or NULL.
- *
- * In case of error, reason is reported through error_report.
  */
 VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
                                     const VhostShadowVirtqueueOps *ops,
                                     void *ops_opaque)
 {
-    g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
-    int r;
-
-    r = event_notifier_init(&svq->hdev_kick, 0);
-    if (r != 0) {
-        error_report("Couldn't create kick event notifier: %s (%d)",
-                     g_strerror(errno), errno);
-        goto err_init_hdev_kick;
-    }
-
-    r = event_notifier_init(&svq->hdev_call, 0);
-    if (r != 0) {
-        error_report("Couldn't create call event notifier: %s (%d)",
-                     g_strerror(errno), errno);
-        goto err_init_hdev_call;
-    }
+    VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
 
     event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
     svq->iova_tree = iova_tree;
     svq->ops = ops;
     svq->ops_opaque = ops_opaque;
-    return g_steal_pointer(&svq);
-
-err_init_hdev_call:
-    event_notifier_cleanup(&svq->hdev_kick);
-
-err_init_hdev_kick:
-    return NULL;
+    return svq;
 }
 
 /**
@@ -763,7 +738,5 @@ void vhost_svq_free(gpointer pvq)
 {
     VhostShadowVirtqueue *vq = pvq;
     vhost_svq_stop(vq);
-    event_notifier_cleanup(&vq->hdev_kick);
-    event_notifier_cleanup(&vq->hdev_call);
     g_free(vq);
 }
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 7f0ff4df5b..3df2775760 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -428,15 +428,11 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
 
     shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
     for (unsigned n = 0; n < hdev->nvqs; ++n) {
-        g_autoptr(VhostShadowVirtqueue) svq;
+        VhostShadowVirtqueue *svq;
 
         svq = vhost_svq_new(v->iova_tree, v->shadow_vq_ops,
                             v->shadow_vq_ops_opaque);
-        if (unlikely(!svq)) {
-            error_setg(errp, "Cannot create svq %u", n);
-            return -1;
-        }
-        g_ptr_array_add(shadow_vqs, g_steal_pointer(&svq));
+        g_ptr_array_add(shadow_vqs, svq);
     }
 
     v->shadow_vqs = g_steal_pointer(&shadow_vqs);
@@ -864,11 +860,23 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
     const EventNotifier *event_notifier = &svq->hdev_kick;
     int r;
 
+    r = event_notifier_init(&svq->hdev_kick, 0);
+    if (r != 0) {
+        error_setg_errno(errp, -r, "Couldn't create kick event notifier");
+        goto err_init_hdev_kick;
+    }
+
+    r = event_notifier_init(&svq->hdev_call, 0);
+    if (r != 0) {
+        error_setg_errno(errp, -r, "Couldn't create call event notifier");
+        goto err_init_hdev_call;
+    }
+
     file.fd = event_notifier_get_fd(event_notifier);
     r = vhost_vdpa_set_vring_dev_kick(dev, &file);
     if (unlikely(r != 0)) {
         error_setg_errno(errp, -r, "Can't set device kick fd");
-        return r;
+        goto err_init_set_dev_fd;
     }
 
     event_notifier = &svq->hdev_call;
@@ -876,8 +884,18 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
     r = vhost_vdpa_set_vring_dev_call(dev, &file);
     if (unlikely(r != 0)) {
         error_setg_errno(errp, -r, "Can't set device call fd");
+        goto err_init_set_dev_fd;
     }
 
+    return 0;
+
+err_init_set_dev_fd:
+    event_notifier_set_handler(&svq->hdev_call, NULL);
+
+err_init_hdev_call:
+    event_notifier_cleanup(&svq->hdev_kick);
+
+err_init_hdev_kick:
     return r;
 }
 
@@ -1089,6 +1107,9 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
     for (unsigned i = 0; i < v->shadow_vqs->len; ++i) {
         VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
         vhost_vdpa_svq_unmap_rings(dev, svq);
+
+        event_notifier_cleanup(&svq->hdev_kick);
+        event_notifier_cleanup(&svq->hdev_call);
     }
 }