From patchwork Thu Dec 2 15:34:02 2021
X-Patchwork-Submitter: Stefan Hajnoczi <stefanha@redhat.com>
X-Patchwork-Id: 12652871
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Hanna Reitz, Daniel P. Berrangé, Stefan Hajnoczi, Kevin Wolf,
    "Richard W.M. Jones", Stefano Garzarella, Paolo Bonzini,
    Aarushi Mehta, Ronnie Sahlberg, "Michael S. Tsirkin",
    Julia Suvorova, Juan Quintela, Philippe Mathieu-Daudé,
    Anthony Perard, Paul Durrant, Coiby Xu, qemu-block@nongnu.org,
    "Dr. David Alan Gilbert", xen-devel@lists.xenproject.org,
    Stefan Weil, Fam Zheng, Stefano Stabellini, Peter Lieven
Subject: [PATCH v2 6/6] virtio: unify dataplane and non-dataplane ->handle_output()
Date: Thu, 2 Dec 2021 15:34:02 +0000
Message-Id: <20211202153402.604951-7-stefanha@redhat.com>
In-Reply-To: <20211202153402.604951-1-stefanha@redhat.com>
References: <20211202153402.604951-1-stefanha@redhat.com>

Now that virtio-blk and virtio-scsi are ready, get rid of the
handle_aio_output() callback. It's no longer needed.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
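A note for reviewers: with the handler argument gone, attaching a host
notifier to an IOThread's AioContext always installs
virtio_queue_host_notifier_read(), the same entry point the main loop
uses. Roughly (paraphrasing hw/virtio/virtio.c, not part of this diff):

    void virtio_queue_host_notifier_read(EventNotifier *n)
    {
        VirtQueue *vq = container_of(n, VirtQueue, host_notifier);

        if (event_notifier_test_and_clear(n)) {
            virtio_queue_notify_vq(vq);   /* invokes vq->handle_output() */
        }
    }

So a guest kick now reaches the device's regular ->handle_output callback
whether or not dataplane is active; only the AioContext it runs in differs.
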
Tsirkin" , Julia Suvorova , Juan Quintela , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Anthony Perard , Paul Durrant , Coiby Xu , qemu-block@nongnu.org, "Dr. David Alan Gilbert" , xen-devel@lists.xenproject.org, Stefan Weil , Fam Zheng , Stefano Stabellini , Peter Lieven Subject: [PATCH v2 6/6] virtio: unify dataplane and non-dataplane ->handle_output() Date: Thu, 2 Dec 2021 15:34:02 +0000 Message-Id: <20211202153402.604951-7-stefanha@redhat.com> In-Reply-To: <20211202153402.604951-1-stefanha@redhat.com> References: <20211202153402.604951-1-stefanha@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=stefanha@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Now that virtio-blk and virtio-scsi are ready, get rid of the handle_aio_output() callback. It's no longer needed. Signed-off-by: Stefan Hajnoczi --- include/hw/virtio/virtio.h | 4 +-- hw/block/dataplane/virtio-blk.c | 16 ++-------- hw/scsi/virtio-scsi-dataplane.c | 54 ++++----------------------------- hw/virtio/virtio.c | 32 +++++++++---------- 4 files changed, 26 insertions(+), 80 deletions(-) diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h index b90095628f..f095637058 100644 --- a/include/hw/virtio/virtio.h +++ b/include/hw/virtio/virtio.h @@ -316,8 +316,8 @@ bool virtio_device_ioeventfd_enabled(VirtIODevice *vdev); EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq); void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled); void virtio_queue_host_notifier_read(EventNotifier *n); -void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx, - VirtIOHandleOutput handle_output); +void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx); +void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx); VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector); VirtQueue *virtio_vector_next_queue(VirtQueue *vq); diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c index 1b50ccd38b..f88f08ef59 100644 --- a/hw/block/dataplane/virtio-blk.c +++ b/hw/block/dataplane/virtio-blk.c @@ -154,17 +154,6 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s) g_free(s); } -static void virtio_blk_data_plane_handle_output(VirtIODevice *vdev, - VirtQueue *vq) -{ - VirtIOBlock *s = (VirtIOBlock *)vdev; - - assert(s->dataplane); - assert(s->dataplane_started); - - virtio_blk_handle_vq(s, vq); -} - /* Context: QEMU global mutex held */ int virtio_blk_data_plane_start(VirtIODevice *vdev) { @@ -258,8 +247,7 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev) for (i = 0; i < nvqs; i++) { VirtQueue *vq = virtio_get_queue(s->vdev, i); - virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, - virtio_blk_data_plane_handle_output); + virtio_queue_aio_attach_host_notifier(vq, s->ctx); } aio_context_release(s->ctx); return 0; @@ -302,7 +290,7 @@ static void virtio_blk_data_plane_stop_bh(void *opaque) for (i = 0; i < s->conf->num_queues; i++) { VirtQueue *vq = virtio_get_queue(s->vdev, i); - virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, NULL); + virtio_queue_aio_detach_host_notifier(vq, s->ctx); } } diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c index 76137de67f..29575cbaf6 100644 --- a/hw/scsi/virtio-scsi-dataplane.c +++ b/hw/scsi/virtio-scsi-dataplane.c @@ -49,45 +49,6 @@ void virtio_scsi_dataplane_setup(VirtIOSCSI *s, Error **errp) } } -static void 
-                                              VirtQueue *vq)
-{
-    VirtIOSCSI *s = VIRTIO_SCSI(vdev);
-
-    virtio_scsi_acquire(s);
-    if (!s->dataplane_fenced) {
-        assert(s->ctx && s->dataplane_started);
-        virtio_scsi_handle_cmd_vq(s, vq);
-    }
-    virtio_scsi_release(s);
-}
-
-static void virtio_scsi_data_plane_handle_ctrl(VirtIODevice *vdev,
-                                               VirtQueue *vq)
-{
-    VirtIOSCSI *s = VIRTIO_SCSI(vdev);
-
-    virtio_scsi_acquire(s);
-    if (!s->dataplane_fenced) {
-        assert(s->ctx && s->dataplane_started);
-        virtio_scsi_handle_ctrl_vq(s, vq);
-    }
-    virtio_scsi_release(s);
-}
-
-static void virtio_scsi_data_plane_handle_event(VirtIODevice *vdev,
-                                                VirtQueue *vq)
-{
-    VirtIOSCSI *s = VIRTIO_SCSI(vdev);
-
-    virtio_scsi_acquire(s);
-    if (!s->dataplane_fenced) {
-        assert(s->ctx && s->dataplane_started);
-        virtio_scsi_handle_event_vq(s, vq);
-    }
-    virtio_scsi_release(s);
-}
-
 static int virtio_scsi_set_host_notifier(VirtIOSCSI *s, VirtQueue *vq, int n)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
@@ -112,10 +73,10 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
     int i;
 
-    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
-    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
+    virtio_queue_aio_detach_host_notifier(vs->ctrl_vq, s->ctx);
+    virtio_queue_aio_detach_host_notifier(vs->event_vq, s->ctx);
     for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
+        virtio_queue_aio_detach_host_notifier(vs->cmd_vqs[i], s->ctx);
     }
 }
 
@@ -176,14 +137,11 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
     memory_region_transaction_commit();
 
     aio_context_acquire(s->ctx);
-    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx,
-                                               virtio_scsi_data_plane_handle_ctrl);
-    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx,
-                                               virtio_scsi_data_plane_handle_event);
+    virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
+    virtio_queue_aio_attach_host_notifier(vs->event_vq, s->ctx);
     for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx,
-                                                   virtio_scsi_data_plane_handle_cmd);
+        virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
     }
 
     s->dataplane_starting = false;

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index a97a406d3c..ce182a4e57 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3522,23 +3522,23 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
     virtio_queue_set_notification(vq, 1);
 }
 
-void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
-                                                VirtIOHandleOutput handle_output)
+void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    if (handle_output) {
-        aio_set_event_notifier(ctx, &vq->host_notifier, true,
-                               virtio_queue_host_notifier_read,
-                               virtio_queue_host_notifier_aio_poll,
-                               virtio_queue_host_notifier_aio_poll_ready);
-        aio_set_event_notifier_poll(ctx, &vq->host_notifier,
-                                    virtio_queue_host_notifier_aio_poll_begin,
-                                    virtio_queue_host_notifier_aio_poll_end);
-    } else {
-        aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
-        /* Test and clear notifier before after disabling event,
-         * in case poll callback didn't have time to run. */
-        virtio_queue_host_notifier_read(&vq->host_notifier);
-    }
+    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+                           virtio_queue_host_notifier_read,
+                           virtio_queue_host_notifier_aio_poll,
+                           virtio_queue_host_notifier_aio_poll_ready);
+    aio_set_event_notifier_poll(ctx, &vq->host_notifier,
+                                virtio_queue_host_notifier_aio_poll_begin,
+                                virtio_queue_host_notifier_aio_poll_end);
+}
+
+void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
+{
+    aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
+    /* Test and clear notifier after disabling event,
+     * in case poll callback didn't have time to run. */
+    virtio_queue_host_notifier_read(&vq->host_notifier);
 }
 
 void virtio_queue_host_notifier_read(EventNotifier *n)
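
For anyone converting another device to this API, the expected pattern is
attach on start and detach on stop, the latter from a bottom half running in
the IOThread's AioContext. A minimal sketch, with hypothetical MyDev names
that are not part of this patch:

    /* Hypothetical device, for illustration only -- not part of this patch.
     * No dataplane-specific handler is registered; guest kicks reach the
     * virtqueue's normal ->handle_output callback, running in s->ctx.
     */
    static int mydev_dataplane_start(VirtIODevice *vdev)
    {
        MyDevDataplane *s = MYDEV(vdev)->dataplane;
        int i;

        /* ... assign host notifiers and start the vrings as before ... */

        aio_context_acquire(s->ctx);
        for (i = 0; i < s->num_queues; i++) {
            virtio_queue_aio_attach_host_notifier(virtio_get_queue(vdev, i),
                                                  s->ctx);
        }
        aio_context_release(s->ctx);
        return 0;
    }

    /* Runs in s->ctx via aio_wait_bh_oneshot(), like
     * virtio_blk_data_plane_stop_bh() above.
     */
    static void mydev_dataplane_stop_bh(void *opaque)
    {
        MyDevDataplane *s = opaque;
        int i;

        for (i = 0; i < s->num_queues; i++) {
            virtio_queue_aio_detach_host_notifier(virtio_get_queue(s->vdev, i),
                                                  s->ctx);
        }
    }

Note that virtio_queue_aio_detach_host_notifier() ends with a final
virtio_queue_host_notifier_read() so a kick that raced with handler removal
is not lost.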