From patchwork Wed Apr 19 17:28:02 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini, Marcel Apfelbaum, Fam Zheng, Julia Suvorova,
    Hanna Reitz, Daniel P. Berrangé, Paolo Bonzini, Coiby Xu, Paul Durrant,
    Ronnie Sahlberg, Eduardo Habkost, Juan Quintela, "Michael S. Tsirkin",
    Stefano Garzarella, Anthony Perard, Kevin Wolf, "Richard W.M. Jones",
    Richard Henderson, xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
    "Dr. David Alan Gilbert", Philippe Mathieu-Daudé, Peter Lieven,
    eesposit@redhat.com, Aarushi Mehta, Stefan Weil, Xie Yongji,
    David Woodhouse
Subject: [PATCH v2 01/16] hw/qdev: introduce qdev_is_realized() helper
Date: Wed, 19 Apr 2023 13:28:02 -0400
Message-Id: <20230419172817.272758-2-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

Add a helper function to check whether the device is realized without
requiring the Big QEMU Lock. The next patch adds a second caller. The
goal is to avoid spreading DeviceState field accesses throughout the
code.

Suggested-by: Philippe Mathieu-Daudé
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Philippe Mathieu-Daudé
---
 include/hw/qdev-core.h | 17 ++++++++++++++---
 hw/scsi/scsi-bus.c     |  3 +--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
index bd50ad5ee1..4d734cf35e 100644
--- a/include/hw/qdev-core.h
+++ b/include/hw/qdev-core.h
@@ -1,6 +1,7 @@
 #ifndef QDEV_CORE_H
 #define QDEV_CORE_H
 
+#include "qemu/atomic.h"
 #include "qemu/queue.h"
 #include "qemu/bitmap.h"
 #include "qemu/rcu.h"
@@ -164,9 +165,6 @@ struct NamedClockList {
 
 /**
  * DeviceState:
- * @realized: Indicates whether the device has been fully constructed.
- *            When accessed outside big qemu lock, must be accessed with
- *            qatomic_load_acquire()
  * @reset: ResettableState for the device; handled by Resettable interface.
  *
  * This structure should not be accessed directly. We declare it here
@@ -332,6 +330,19 @@ DeviceState *qdev_new(const char *name);
  */
 DeviceState *qdev_try_new(const char *name);
 
+/**
+ * qdev_is_realized:
+ * @dev: The device to check.
+ *
+ * May be called outside big qemu lock.
+ *
+ * Returns: %true% if the device has been fully constructed, %false% otherwise.
+ */
+static inline bool qdev_is_realized(DeviceState *dev)
+{
+    return qatomic_load_acquire(&dev->realized);
+}
+
 /**
  * qdev_realize: Realize @dev.
  * @dev: device to realize
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index c97176110c..07275fb631 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -60,8 +60,7 @@ static SCSIDevice *do_scsi_device_find(SCSIBus *bus,
      * the user access the device.
      */
 
-    if (retval && !include_unrealized &&
-        !qatomic_load_acquire(&retval->qdev.realized)) {
+    if (retval && !include_unrealized && !qdev_is_realized(&retval->qdev)) {
         retval = NULL;
     }
From patchwork Wed Apr 19 17:28:03 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, qemu-block@nongnu.org
Subject: [PATCH v2 02/16] virtio-scsi: avoid race between unplug and transport event
Date: Wed, 19 Apr 2023 13:28:03 -0400
Message-Id: <20230419172817.272758-3-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

Only report a transport reset event to the guest after the SCSIDevice
has been unrealized by qdev_simple_device_unplug_cb().

qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized
field to false so that scsi_device_find/get() no longer see it.

scsi_target_emulate_report_luns() also needs to be updated to filter
out SCSIDevices that are unrealized.

These changes ensure that the guest driver does not see the SCSIDevice
that's being unplugged if it responds very quickly to the transport
reset event.

Reviewed-by: Paolo Bonzini
Reviewed-by: Michael S. Tsirkin
Reviewed-by: Daniil Tatianin
Signed-off-by: Stefan Hajnoczi
---
 hw/scsi/scsi-bus.c    |  3 ++-
 hw/scsi/virtio-scsi.c | 18 +++++++++---------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 07275fb631..64d7311757 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -486,7 +486,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
         DeviceState *qdev = kid->child;
         SCSIDevice *dev = SCSI_DEVICE(qdev);
 
-        if (dev->channel == channel && dev->id == id && dev->lun != 0) {
+        if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
+            qdev_is_realized(&dev->qdev)) {
             store_lun(tmp, dev->lun);
             g_byte_array_append(buf, tmp, 8);
             len += 8;
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..000961446c 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1063,15 +1063,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     SCSIDevice *sd = SCSI_DEVICE(dev);
     AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
-        virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_REMOVED);
-        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
-        virtio_scsi_release(s);
-    }
-
     aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
     aio_enable_external(ctx);
@@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
         virtio_scsi_release(s);
     }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        virtio_scsi_acquire(s);
+        virtio_scsi_push_event(s, sd,
+                               VIRTIO_SCSI_T_TRANSPORT_RESET,
+                               VIRTIO_SCSI_EVT_RESET_REMOVED);
+        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
+        virtio_scsi_release(s);
+    }
 }

From patchwork Wed Apr 19 17:28:04 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, qemu-block@nongnu.org
Subject: [PATCH v2 03/16] virtio-scsi: stop using aio_disable_external() during unplug
Date: Wed, 19 Apr 2023 13:28:04 -0400
Message-Id: <20230419172817.272758-4-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

This patch is part of an effort to remove the aio_disable_external()
API because it does not fit in a multi-queue block layer world where
many AioContexts may be submitting requests to the same disk.

The SCSI emulation code is already in good shape to stop using
aio_disable_external(). It was only used by commit 9c5aad84da1c
("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
disk") to ensure that virtio_scsi_hotunplug() works while the guest
driver is submitting I/O.

Ensure virtio_scsi_hotunplug() is safe as follows:

1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
   device_set_realized() calls qatomic_set(&dev->realized, false) so
   that future scsi_device_get() calls return NULL because they exclude
   SCSIDevices with realized=false.

   That means virtio-scsi will reject new I/O requests to this
   SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
   virtio_scsi_hotunplug() is still executing. We are protected against
   new requests!

2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
   that in-flight requests are cancelled synchronously. This ensures
   that no in-flight requests remain once qdev_simple_device_unplug_cb()
   returns.

Thanks to these two conditions we don't need aio_disable_external()
anymore.

Cc: Zhengui Li
Reviewed-by: Paolo Bonzini
Reviewed-by: Daniil Tatianin
Signed-off-by: Stefan Hajnoczi
---
 hw/scsi/scsi-disk.c   | 1 +
 hw/scsi/virtio-scsi.c | 3 ---
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 97c9b1c8cd..e01bd84541 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2522,6 +2522,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
 
 static void scsi_unrealize(SCSIDevice *dev)
 {
+    scsi_device_purge_requests(dev, SENSE_CODE(RESET));
     del_boot_device_lchs(&dev->qdev, NULL);
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 000961446c..a02f9233ec 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1061,11 +1061,8 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
-    aio_enable_external(ctx);
 
     if (s->ctx) {
         virtio_scsi_acquire(s);
From patchwork Wed Apr 19 17:28:05 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, qemu-block@nongnu.org
Subject: [PATCH v2 04/16] block/export: only acquire AioContext once for vhost_user_server_stop()
Date: Wed, 19 Apr 2023 13:28:05 -0400
Message-Id: <20230419172817.272758-5-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

vhost_user_server_stop() uses AIO_WAIT_WHILE(). AIO_WAIT_WHILE()
requires that AioContext is only acquired once. Since
blk_exp_request_shutdown() already acquires the AioContext it shouldn't
be acquired again in vhost_user_server_stop().

Signed-off-by: Stefan Hajnoczi
---
 util/vhost-user-server.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 40f36ea214..5b6216069c 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -346,10 +346,9 @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
     aio_context_release(server->ctx);
 }
 
+/* server->ctx acquired by caller */
 void vhost_user_server_stop(VuServer *server)
 {
-    aio_context_acquire(server->ctx);
-
     qemu_bh_delete(server->restart_listener_bh);
     server->restart_listener_bh = NULL;
 
@@ -366,8 +365,6 @@ void vhost_user_server_stop(VuServer *server)
         AIO_WAIT_WHILE(server->ctx, server->co_trip);
     }
 
-    aio_context_release(server->ctx);
-
     if (server->listener) {
         qio_net_listener_disconnect(server->listener);
         object_unref(OBJECT(server->listener));
From patchwork Wed Apr 19 17:28:06 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, qemu-block@nongnu.org
Subject: [PATCH v2 05/16] util/vhost-user-server: rename refcount to in_flight counter
Date: Wed, 19 Apr 2023 13:28:06 -0400
Message-Id: <20230419172817.272758-6-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

The VuServer object has a refcount field and ref/unref APIs. The name
is confusing because it's actually an in-flight request counter instead
of a refcount.

Normally a refcount destroys the object upon reaching zero. The VuServer
counter is used to wake up the vhost-user coroutine when there are no
more requests.

Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.

Reviewed-by: Paolo Bonzini
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Philippe Mathieu-Daudé
---
 include/qemu/vhost-user-server.h     |  6 +++---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 util/vhost-user-server.c             | 14 +++++++-------
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 25c72433ca..bc0ac9ddb6 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -41,7 +41,7 @@ typedef struct {
     const VuDevIface *vu_iface;
 
     /* Protected by ctx lock */
-    unsigned int refcount;
+    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_ref(VuServer *server);
-void vhost_user_server_unref(VuServer *server);
+void vhost_user_server_inc_in_flight(VuServer *server);
+void vhost_user_server_dec_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 3409d9e02e..e93f2ed6b4 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -49,7 +49,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
     free(req);
 }
 
-/* Called with server refcount increased, must decrease before returning */
+/*
+ * Called with server in_flight counter increased, must decrease before
+ * returning.
+ */
 static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
     VuBlkReq *req = opaque;
@@ -67,12 +70,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
                                       in_num, out_num);
     if (in_len < 0) {
         free(req);
-        vhost_user_server_unref(server);
+        vhost_user_server_dec_in_flight(server);
         return;
     }
 
     vu_blk_req_complete(req, in_len);
-    vhost_user_server_unref(server);
+    vhost_user_server_dec_in_flight(server);
 }
 
 static void vu_blk_process_vq(VuDev *vu_dev, int idx)
@@ -94,7 +97,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
 
         Coroutine *co =
             qemu_coroutine_create(vu_blk_virtio_process_req, req);
-        vhost_user_server_ref(server);
+        vhost_user_server_inc_in_flight(server);
         qemu_coroutine_enter(co);
     }
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5b6216069c..1622f8cfb3 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
     error_report("vu_panic: %s", buf);
 }
 
-void vhost_user_server_ref(VuServer *server)
+void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->refcount++;
+    server->in_flight++;
 }
 
-void vhost_user_server_unref(VuServer *server)
+void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->refcount--;
-    if (server->wait_idle && !server->refcount) {
+    server->in_flight--;
+    if (server->wait_idle && !server->in_flight) {
         aio_co_wake(server->co_trip);
     }
 }
@@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->refcount) {
+    if (server->in_flight) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->refcount == 0);
+    assert(server->in_flight == 0);
 
     vu_deinit(vu_dev);
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13217194
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 06/16] block/export: wait for vhost-user-blk requests when draining
Date: Wed, 19 Apr 2023 13:28:07 -0400
Message-Id: <20230419172817.272758-7-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv/pwritev()/etc returns it wakes the
bdrv_drained_begin() thread but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.

(This example is theoretical, I came across this while reading the code
and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter needs to be atomic.
Signed-off-by: Stefan Hajnoczi
---
 include/qemu/vhost-user-server.h     |  4 +++-
 block/export/vhost-user-blk-server.c | 19 +++++++++++++++++++
 util/vhost-user-server.c             | 14 ++++++++++----
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index bc0ac9ddb6..b1c1cda886 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -40,8 +40,9 @@ typedef struct {
     int max_queues;
     const VuDevIface *vu_iface;

+    unsigned int in_flight; /* atomic */
+
     /* Protected by ctx lock */
-    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);

 void vhost_user_server_inc_in_flight(VuServer *server);
 void vhost_user_server_dec_in_flight(VuServer *server);
+bool vhost_user_server_has_in_flight(VuServer *server);

 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index e93f2ed6b4..dbf5207162 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -254,6 +254,22 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }

+/*
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ *
+ * Called with vexp->export.ctx acquired.
+ */
+static bool vu_blk_drained_poll(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    return vhost_user_server_has_in_flight(&vexp->vu_server);
+}
+
+static const BlockDevOps vu_blk_dev_ops = {
+    .drained_poll = vu_blk_drained_poll,
+};
+
 static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              Error **errp)
 {
@@ -292,6 +308,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
                              logical_block_size, num_queues);

+    blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);

@@ -299,6 +316,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                                  num_queues, &vu_blk_iface, errp)) {
         blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                         blk_aio_detach, vexp);
+        blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
     }
@@ -312,6 +330,7 @@ static void vu_blk_exp_delete(BlockExport *exp)

     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vexp);
+    blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 1622f8cfb3..2e6b640050 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }

 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }

+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn
 vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
 {

From patchwork Wed Apr 19 17:28:08 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13217197
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 07/16] block/export: stop using is_external in vhost-user-blk server
Date: Wed, 19 Apr 2023 13:28:08 -0400
Message-Id: <20230419172817.272758-8-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContext may be processing I/O, not
just one.

Switch to BlockDevOps->drained_begin/end() callbacks.

Signed-off-by: Stefan Hajnoczi
---
 block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
 util/vhost-user-server.c             | 10 +++----
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index dbf5207162..6e1bc196fb 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -207,22 +207,6 @@ static const VuDevIface vu_blk_iface = {
     .process_msg           = vu_blk_process_msg,
 };

-static void blk_aio_attached(AioContext *ctx, void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
-}
-
-static void blk_aio_detach(void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
-    vexp->export.ctx = NULL;
-}
-
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
                          struct virtio_blk_config *config,
@@ -254,6 +238,25 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }

+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -267,6 +270,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }

 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end = vu_blk_drained_end,
     .drained_poll = vu_blk_drained_poll,
 };

@@ -309,13 +314,9 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              logical_block_size, num_queues);

     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
-    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                 vexp);

     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
                                  num_queues, &vu_blk_iface, errp)) {
-        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
-                                        blk_aio_detach, vexp);
         blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
@@ -328,8 +329,6 @@ static void vu_blk_exp_delete(BlockExport *exp)
 {
     VuBlkExport *vexp = container_of(exp, VuBlkExport, export);

-    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                    vexp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 2e6b640050..332aea9306 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);

     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;

         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }

@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);

     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }

@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
     VuFdWatch *vu_fd_watch;

     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+        aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                            NULL, NULL, NULL, NULL, vu_fd_watch);
     }

From patchwork Wed Apr 19 17:28:09 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13217199
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 08/16] hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Date: Wed, 19 Apr 2023 13:28:09 -0400
Message-Id: <20230419172817.272758-9-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

There is no need to suspend activity between aio_disable_external() and
aio_enable_external(), which is mainly used for the block layer's drain
operation.

This is part of ongoing work to remove the aio_disable_external() API.

Reviewed-by: David Woodhouse
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Paul Durrant
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..6e81bc8791 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
                        xen_xenstore_event, NULL, NULL, NULL, s);

     s->impl = xs_impl_create(xen_domid);

From patchwork Wed Apr 19 17:28:10 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13217198
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 09/16] block: add blk_in_drain() API
Date: Wed, 19 Apr 2023 13:28:10 -0400
Message-Id: <20230419172817.272758-10-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

The BlockBackend quiesce_counter is greater than zero during drained
sections. Add an API to check whether the BlockBackend is in a drained
section.

The next patch will use this API.
Signed-off-by: Stefan Hajnoczi
---
 include/sysemu/block-backend-global-state.h | 1 +
 block/block-backend.c                       | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/sysemu/block-backend-global-state.h b/include/sysemu/block-backend-global-state.h
index 2b6d27db7c..ac7cbd6b5e 100644
--- a/include/sysemu/block-backend-global-state.h
+++ b/include/sysemu/block-backend-global-state.h
@@ -78,6 +78,7 @@ void blk_activate(BlockBackend *blk, Error **errp);
 int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags);
 void blk_aio_cancel(BlockAIOCB *acb);
 int blk_commit_all(void);
+bool blk_in_drain(BlockBackend *blk);
 void blk_drain(BlockBackend *blk);
 void blk_drain_all(void);
 void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
diff --git a/block/block-backend.c b/block/block-backend.c
index 9e0f48692a..e0a1d9ec0f 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1270,6 +1270,13 @@ blk_check_byte_request(BlockBackend *blk, int64_t offset, int64_t bytes)
     return 0;
 }

+/* Are we currently in a drained section? */
+bool blk_in_drain(BlockBackend *blk)
+{
+    GLOBAL_STATE_CODE(); /* change to IO_OR_GS_CODE(), if necessary */
+    return qatomic_read(&blk->quiesce_counter);
+}
+
 /* To be called between exactly one pair of blk_inc/dec_in_flight() */
 static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {

From patchwork Wed Apr 19 17:28:11 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13217202
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini, Marcel Apfelbaum, Fam Zheng, Stefan Hajnoczi,
    Julia Suvorova, Hanna Reitz, Daniel P. Berrangé, Paolo Bonzini,
    Coiby Xu, Paul Durrant, Ronnie Sahlberg, Eduardo Habkost,
    Juan Quintela, "Michael S. Tsirkin", Stefano Garzarella,
    Anthony Perard, Kevin Wolf, "Richard W.M. Jones",
    Richard Henderson, xen-devel@lists.xenproject.org,
    qemu-block@nongnu.org, "Dr. David Alan Gilbert",
    Philippe Mathieu-Daudé, Peter Lieven, eesposit@redhat.com,
    Aarushi Mehta, Stefan Weil, Xie Yongji, David Woodhouse
Subject: [PATCH v2 10/16] block: drain from main loop thread in bdrv_co_yield_to_drain()
Date: Wed, 19 Apr 2023 13:28:11 -0400
Message-Id: <20230419172817.272758-11-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

For simplicity, always run BlockDevOps .drained_begin/end/poll() callbacks in
the main loop thread. This makes it easier to implement the callbacks and
avoids extra locks.

Move the function pointer declarations from the I/O Code section to the
Global State section in block-backend-common.h.

Signed-off-by: Stefan Hajnoczi
---
 include/sysemu/block-backend-common.h | 25 +++++++++++++------------
 block/io.c                            |  3 ++-
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/include/sysemu/block-backend-common.h b/include/sysemu/block-backend-common.h
index 2391679c56..780cea7305 100644
--- a/include/sysemu/block-backend-common.h
+++ b/include/sysemu/block-backend-common.h
@@ -59,6 +59,19 @@ typedef struct BlockDevOps {
      */
     bool (*is_medium_locked)(void *opaque);
 
+    /*
+     * Runs when the backend receives a drain request.
+     */
+    void (*drained_begin)(void *opaque);
+    /*
+     * Runs when the backend's last drain request ends.
+     */
+    void (*drained_end)(void *opaque);
+    /*
+     * Is the device still busy?
+     */
+    bool (*drained_poll)(void *opaque);
+
     /*
      * I/O API functions. These functions are thread-safe.
      *
@@ -76,18 +89,6 @@ typedef struct BlockDevOps {
      * Runs when the size changed (e.g. monitor command block_resize)
      */
     void (*resize_cb)(void *opaque);
-    /*
-     * Runs when the backend receives a drain request.
-     */
-    void (*drained_begin)(void *opaque);
-    /*
-     * Runs when the backend's last drain request ends.
-     */
-    void (*drained_end)(void *opaque);
-    /*
-     * Is the device still busy?
-     */
-    bool (*drained_poll)(void *opaque);
 } BlockDevOps;
 
 /*

diff --git a/block/io.c b/block/io.c
index db438c7657..6285d67546 100644
--- a/block/io.c
+++ b/block/io.c
@@ -331,7 +331,8 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
         if (ctx != co_ctx) {
             aio_context_release(ctx);
         }
-        replay_bh_schedule_oneshot_event(ctx, bdrv_co_drain_bh_cb, &data);
+        replay_bh_schedule_oneshot_event(qemu_get_aio_context(),
+                                         bdrv_co_drain_bh_cb, &data);
         qemu_coroutine_yield();
         /* If we are resumed from some other event (such as an aio completion or a

From patchwork Wed Apr 19 17:28:12 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 11/16] xen-block: implement BlockDevOps->drained_begin()
Date: Wed, 19 Apr 2023 13:28:12 -0400
Message-Id: <20230419172817.272758-12-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

Detach event channels during drained sections to stop I/O submission from the
ring. xen-block is no longer reliant on aio_disable_external() after this
patch. This will allow us to remove the aio_disable_external() API once all
other code that relies on it is converted.

Extend xen_device_set_event_channel_context() to allow ctx=NULL. The event
channel still exists but the event loop does not monitor the file descriptor.
Event channel processing can resume by calling
xen_device_set_event_channel_context() with a non-NULL ctx.

Factor out xen_device_set_event_channel_context() calls in
hw/block/dataplane/xen-block.c into attach/detach helper functions.
Incidentally, these don't require the AioContext lock because
aio_set_fd_handler() is thread-safe.

It's safer to register BlockDevOps after the dataplane instance has been
created.
The BlockDevOps .drained_begin/end() callbacks depend on the dataplane
instance, so move the blk_set_dev_ops() call after xen_block_dataplane_create().

Signed-off-by: Stefan Hajnoczi
---
 hw/block/dataplane/xen-block.h |  2 ++
 hw/block/dataplane/xen-block.c | 42 +++++++++++++++++++++++++---------
 hw/block/xen-block.c           | 24 ++++++++++++++++---
 hw/xen/xen-bus.c               |  7 ++++--
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/hw/block/dataplane/xen-block.h b/hw/block/dataplane/xen-block.h
index 76dcd51c3d..7b8e9df09f 100644
--- a/hw/block/dataplane/xen-block.h
+++ b/hw/block/dataplane/xen-block.h
@@ -26,5 +26,7 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                unsigned int protocol,
                                Error **errp);
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane);
 
 #endif /* HW_BLOCK_DATAPLANE_XEN_BLOCK_H */
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..02e0fd6115 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -663,6 +663,30 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
     g_free(dataplane);
 }
 
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         NULL, &error_abort);
+}
+
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         dataplane->ctx, &error_abort);
+}
+
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
@@ -673,13 +697,11 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 
     xendev = dataplane->xendev;
 
-    aio_context_acquire(dataplane->ctx);
-    if (dataplane->event_channel) {
-        /* Only reason for failure is a NULL channel */
-        xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                             qemu_get_aio_context(),
-                                             &error_abort);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_detach(dataplane);
     }
+
+    aio_context_acquire(dataplane->ctx);
 
     /* Xen doesn't have multiple users for nodes, so this can't fail */
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
@@ -818,11 +840,9 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL);
     aio_context_release(old_context);
 
-    /* Only reason for failure is a NULL channel */
-    aio_context_acquire(dataplane->ctx);
-    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                         dataplane->ctx, &error_abort);
-    aio_context_release(dataplane->ctx);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_attach(dataplane);
+    }
 
     return;
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index f5a744589d..f099914831 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -189,8 +189,26 @@ static void xen_block_resize_cb(void *opaque)
     xen_device_backend_printf(xendev, "state", "%u", state);
 }
 
+/* Suspend request handling */
+static void xen_block_drained_begin(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_detach(blockdev->dataplane);
+}
+
+/* Resume request handling */
+static void xen_block_drained_end(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_attach(blockdev->dataplane);
+}
+
 static const BlockDevOps xen_block_dev_ops = {
-    .resize_cb = xen_block_resize_cb,
+    .resize_cb     = xen_block_resize_cb,
+    .drained_begin = xen_block_drained_begin,
+    .drained_end   = xen_block_drained_end,
 };
 
 static void xen_block_realize(XenDevice *xendev, Error **errp)
@@ -242,8 +260,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
-
     if (conf->discard_granularity == -1) {
         conf->discard_granularity = conf->physical_block_size;
     }
@@ -277,6 +293,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     blockdev->dataplane =
         xen_block_dataplane_create(xendev, blk, conf->logical_block_size,
                                    blockdev->props.iothread);
+
+    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
 }
 
 static void xen_block_frontend_changed(XenDevice *xendev,
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c59850b1de..b8f408c9ed 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -846,8 +846,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
                        NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
-                       xen_device_event, NULL, xen_device_poll, NULL, channel);
+    if (ctx) {
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
+                           true, xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
+    }
 }
 
 XenEventChannel *xen_device_bind_event_channel(XenDevice *xendev,

From patchwork Wed Apr 19 17:28:13 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 12/16] hw/xen: do not set is_external=true on evtchn fds
Date: Wed, 19 Apr 2023 13:28:13 -0400
Message-Id: <20230419172817.272758-13-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

is_external=true suspends fd handlers between aio_disable_external() and
aio_enable_external(). The block layer's drain operation uses this mechanism
to prevent new I/O from sneaking in between bdrv_drained_begin() and
bdrv_drained_end().

The previous commit converted the xen-block device to use BlockDevOps
.drained_begin/end() callbacks. It no longer relies on is_external=true so it
is safe to pass is_external=false.

This is part of ongoing work to remove the aio_disable_external() API.
Signed-off-by: Stefan Hajnoczi
---
 hw/xen/xen-bus.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index b8f408c9ed..bf256d4da2 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           true, xen_device_event, NULL, xen_device_poll, NULL,
-                           channel);
+                           false, xen_device_event, NULL, xen_device_poll,
+                           NULL, channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {

From patchwork Wed Apr 19 17:28:14 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 13/16] block/export: rewrite vduse-blk drain code
Date: Wed, 19 Apr 2023 13:28:14 -0400
Message-Id: <20230419172817.272758-14-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>

vduse_blk_detach_ctx() waits for in-flight requests using AIO_WAIT_WHILE().
This is not allowed according to a comment in bdrv_set_aio_context_commit():

  /*
   * Take the old AioContex when detaching it from bs.
   * At this point, new_context lock is already acquired, and we are now
   * also taking old_context. This is safe as long as bdrv_detach_aio_context
   * does not call AIO_POLL_WHILE().
   */

Use this opportunity to rewrite the drain code in vduse-blk:

- Use the BlockExport refcount so that vduse_blk_exp_delete() is only called
  when there are no more requests in flight.
- Implement .drained_poll() so in-flight request coroutines are stopped by the
  time .bdrv_detach_aio_context() is called.

- Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
  .bdrv_detach_aio_context() constraint violation. It's no longer needed due
  to the previous changes.

- Always handle the VDUSE file descriptor, even in drained sections. The VDUSE
  file descriptor doesn't submit I/O, so it's safe to handle it in drained
  sections. This ensures that the VDUSE kernel code gets a fast response.

- Suspend virtqueue fd handlers in .drained_begin() and resume them in
  .drained_end(). This eliminates the need for the
  aio_set_fd_handler(is_external=true) flag, which is being removed from QEMU.

This is a long list but splitting it into individual commits would probably
lead to git bisect failures - the changes are all related.

Signed-off-by: Stefan Hajnoczi
---
 block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 39 deletions(-)

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index f7ae44e3ce..35dc8fcf45 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
     VduseDev *dev;
     uint16_t num_queues;
     char *recon_file;
-    unsigned int inflight;
+    unsigned int inflight; /* atomic */
+    bool vqs_started;
 } VduseBlkExport;
 
 typedef struct VduseBlkReq {
@@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
 
 static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
-    vblk_exp->inflight++;
+    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
+        /* Prevent export from being deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_ref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
+    }
 }
 
 static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
 {
-    if (--vblk_exp->inflight == 0) {
+    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
+        /* Wake AIO_WAIT_WHILE() */
         aio_wait_kick();
+
+        /* Now the export can be deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_unref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
 
+    if (!vblk_exp->vqs_started) {
+        return; /* vduse_blk_drained_end() will start vqs later */
+    }
+
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -133,9 +149,14 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
+    int fd = vduse_queue_get_fd(vq);
 
-    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, NULL, NULL, NULL, NULL, NULL);
+    if (fd < 0) {
+        return;
+    }
+
+    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+                       NULL, NULL, NULL, NULL, NULL);
 }
 
 static const VduseOps vduse_blk_ops = {
@@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
 
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
-    int i;
-
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, on_vduse_dev_kick, NULL, NULL, NULL,
+                       false, on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
-                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
-    }
+    /* Virtqueues are handled by vduse_blk_drained_end() */
 }
 
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
-    int i;
-
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd,
-                           true, NULL, NULL, NULL, NULL, NULL);
-    }
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, NULL, NULL, NULL, NULL, NULL);
+                       false, NULL, NULL, NULL, NULL, NULL);
 
-    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
+    /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
 
@@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
                                (char *)&config.capacity);
 }
 
+static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
+{
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_disable_queue(vblk_exp->dev, vq);
+    }
+
+    vblk_exp->vqs_started = false;
+}
+
+static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
+{
+    vblk_exp->vqs_started = true;
+
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_enable_queue(vblk_exp->dev, vq);
+    }
+}
+
+static void vduse_blk_drained_begin(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_stop_virtqueues(vblk_exp);
+}
+
+static void vduse_blk_drained_end(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_start_virtqueues(vblk_exp);
+}
+
+static bool vduse_blk_drained_poll(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    return qatomic_read(&vblk_exp->inflight) > 0;
+}
+
 static const BlockDevOps vduse_block_ops = {
-    .resize_cb = vduse_blk_resize,
+    .resize_cb     = vduse_blk_resize,
+    .drained_begin = vduse_blk_drained_begin,
+    .drained_end   = vduse_blk_drained_end,
+    .drained_poll  = vduse_blk_drained_poll,
 };
 
 static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
@@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
     vblk_exp->handler.logical_block_size = logical_block_size;
     vblk_exp->handler.writable = opts->writable;
+    vblk_exp->vqs_started = true;
 
     config.capacity =
         cpu_to_le64(blk_getlength(exp->blk) >> VIRTIO_BLK_SECTOR_BITS);
@@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vblk_exp);
-
     blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
 
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->blk, true);
+
     return 0;
 err:
     vduse_dev_destroy(vblk_exp->dev);
@@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
     int ret;
 
+    assert(qatomic_read(&vblk_exp->inflight) == 0);
+
+    vduse_blk_detach_ctx(vblk_exp);
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vblk_exp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
@@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     g_free(vblk_exp->handler.serial);
 }
 
+/* Called with exp->ctx acquired */
 static void vduse_blk_exp_request_shutdown(BlockExport *exp)
 {
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
 
-    aio_context_acquire(vblk_exp->export.ctx);
-    vduse_blk_detach_ctx(vblk_exp);
-    aio_context_acquire(vblk_exp->export.ctx);
+    vduse_blk_stop_virtqueues(vblk_exp);
 }
 
 const BlockExportDriver blk_exp_vduse_blk = {

From patchwork Wed Apr 19 17:28:15 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 14/16] block/export: don't require AioContext lock around blk_exp_ref/unref()
Date: Wed, 19 Apr 2023 13:28:15 -0400
Message-Id: <20230419172817.272758-15-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>

The FUSE export calls blk_exp_ref/unref() without the AioContext lock. Instead of fixing the FUSE export, adjust blk_exp_ref/unref() so they work without the AioContext lock. This way it's less error-prone.

Suggested-by: Paolo Bonzini
Signed-off-by: Stefan Hajnoczi
---
 include/block/export.h   |  2 ++
 block/export/export.c    | 13 ++++++-------
 block/export/vduse-blk.c |  4 ----
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/block/export.h b/include/block/export.h
index 7feb02e10d..f2fe0f8078 100644
--- a/include/block/export.h
+++ b/include/block/export.h
@@ -57,6 +57,8 @@ struct BlockExport {
     * Reference count for this block export.
This includes strong references
     * both from the owner (qemu-nbd or the monitor) and clients connected to
     * the export.
+    *
+    * Use atomics to access this field.
     */
    int refcount;

diff --git a/block/export/export.c b/block/export/export.c
index e3fee60611..edb05c9268 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -201,11 +201,10 @@ fail:
     return NULL;
 }

-/* Callers must hold exp->ctx lock */
 void blk_exp_ref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    exp->refcount++;
+    assert(qatomic_read(&exp->refcount) > 0);
+    qatomic_inc(&exp->refcount);
 }

 /* Runs in the main thread */
@@ -227,11 +226,10 @@ static void blk_exp_delete_bh(void *opaque)
     aio_context_release(aio_context);
 }

-/* Callers must hold exp->ctx lock */
 void blk_exp_unref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    if (--exp->refcount == 0) {
+    assert(qatomic_read(&exp->refcount) > 0);
+    if (qatomic_fetch_dec(&exp->refcount) == 1) {
         /* Touch the block_exports list only in the main thread */
         aio_bh_schedule_oneshot(qemu_get_aio_context(), blk_exp_delete_bh,
                                 exp);
@@ -339,7 +337,8 @@ void qmp_block_export_del(const char *id,
     if (!has_mode) {
         mode = BLOCK_EXPORT_REMOVE_MODE_SAFE;
     }
-    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE && exp->refcount > 1) {
+    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE &&
+        qatomic_read(&exp->refcount) > 1) {
         error_setg(errp, "export '%s' still in use", exp->id);
         error_append_hint(errp, "Use mode='hard' to force client "
                           "disconnect\n");

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index 35dc8fcf45..611430afda 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -44,9 +44,7 @@ static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
     if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
         /* Prevent export from being deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_ref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }

@@ -57,9 +55,7 @@ static void vduse_blk_inflight_dec(VduseBlkExport
*vblk_exp)
     aio_wait_kick();

     /* Now the export can be deleted */
-    aio_context_acquire(vblk_exp->export.ctx);
     blk_exp_unref(&vblk_exp->export);
-    aio_context_release(vblk_exp->export.ctx);
     }
 }

From patchwork Wed Apr 19 17:28:16 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13217208
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 15/16] block/fuse: do not set is_external=true on FUSE fd
Date: Wed, 19 Apr 2023 13:28:16 -0400
Message-Id: <20230419172817.272758-16-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>

This is part of ongoing work to remove the aio_disable_external() API. Use BlockDevOps .drained_begin/end/poll() instead of aio_set_fd_handler(is_external=true). As a side effect, the FUSE export now follows AioContext changes like the other export types.

Signed-off-by: Stefan Hajnoczi
---
 block/export/fuse.c | 58 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 2 deletions(-)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 06fa41079e..65a7f4d723 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -50,6 +50,7 @@ typedef struct FuseExport {
     struct fuse_session *fuse_session;
     struct fuse_buf fuse_buf;
+    unsigned int in_flight; /* atomic */
     bool mounted, fd_handler_set_up;
     char *mountpoint;

@@ -78,6 +79,42 @@ static void read_from_fuse_export(void *opaque);
 static bool is_regular_file(const char *path, Error **errp);

+static void fuse_export_drained_begin(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       NULL, NULL, NULL, NULL, NULL);
+    exp->fd_handler_set_up = false;
+}
+
+static void fuse_export_drained_end(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    exp->common.ctx = blk_get_aio_context(exp->common.blk);
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       read_from_fuse_export, NULL, NULL, NULL, exp);
+    exp->fd_handler_set_up = true;
+}
+
+static bool fuse_export_drained_poll(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    return qatomic_read(&exp->in_flight) > 0;
+}
+
+static const BlockDevOps fuse_export_blk_dev_ops = {
+    .drained_begin = fuse_export_drained_begin,
+    .drained_end = fuse_export_drained_end,
+    .drained_poll = fuse_export_drained_poll,
+};
+
 static int fuse_export_create(BlockExport *blk_exp,
                               BlockExportOptions *blk_exp_args,
                               Error **errp)
@@ -101,6 +138,15 @@ static int fuse_export_create(BlockExport *blk_exp,
         }
     }

+    blk_set_dev_ops(exp->common.blk, &fuse_export_blk_dev_ops, exp);
+
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * the FUSE fd handler. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->common.blk, true);
+
     init_exports_table();

     /*
@@ -224,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);

     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), true,
+                       fuse_session_fd(exp->fuse_session), false,
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;

@@ -246,6 +292,8 @@ static void read_from_fuse_export(void *opaque)
     blk_exp_ref(&exp->common);

+    qatomic_inc(&exp->in_flight);
+
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
     } while (ret == -EINTR);
@@ -256,6 +304,10 @@
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);

 out:
+    if (qatomic_fetch_dec(&exp->in_flight) == 1) {
+        aio_wait_kick(); /* wake AIO_WAIT_WHILE() */
+    }
+
     blk_exp_unref(&exp->common);
 }

@@ -268,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
     if (exp->fd_handler_set_up) {
         aio_set_fd_handler(exp->common.ctx,
-                           fuse_session_fd(exp->fuse_session), true,
+                           fuse_session_fd(exp->fuse_session), false,
                            NULL, NULL, NULL, NULL, NULL);
exp->fd_handler_set_up = false;
     }

@@ -287,6 +339,8 @@ static void fuse_export_delete(BlockExport *blk_exp)
 {
     FuseExport *exp = container_of(blk_exp, FuseExport, common);

+    blk_set_dev_ops(exp->common.blk, NULL, NULL);
+
     if (exp->fuse_session) {
         if (exp->mounted) {
             fuse_session_unmount(exp->fuse_session);

From patchwork Wed Apr 19 17:28:17 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13217203
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v2 16/16] virtio: make it possible to detach host notifier from any thread
Date: Wed, 19 Apr 2023 13:28:17 -0400
Message-Id: <20230419172817.272758-17-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>

virtio_queue_aio_detach_host_notifier() does two things:
1. It removes the fd handler from the event loop.
2. It processes the virtqueue one last time.

The first step can be performed by any thread and without taking the AioContext lock. The second step may need the AioContext lock (depending on the device implementation) and runs in the thread where request processing takes place. virtio-blk and virtio-scsi therefore call virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in the AioContext.

Scheduling a BH is undesirable for .drained_begin() functions. The next patch will introduce a .drained_begin() function that needs to call virtio_queue_aio_detach_host_notifier(). Move the virtqueue processing out to the callers of virtio_queue_aio_detach_host_notifier() so that the function can be called from any thread. This is in preparation for the next patch.
Signed-off-by: Stefan Hajnoczi
---
 hw/block/dataplane/virtio-blk.c | 2 ++
 hw/scsi/virtio-scsi-dataplane.c | 9 +++++++++
 2 files changed, 11 insertions(+)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index b28d81737e..bd7cc6e76b 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -286,8 +286,10 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)

     for (i = 0; i < s->conf->num_queues; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
+        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);

         virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }

diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 20bb91766e..81643445ed 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -71,12 +71,21 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
 {
     VirtIOSCSI *s = opaque;
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+    EventNotifier *host_notifier;
     int i;

     virtio_queue_aio_detach_host_notifier(vs->ctrl_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->ctrl_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     virtio_queue_aio_detach_host_notifier(vs->event_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->event_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_detach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        host_notifier = virtio_queue_get_host_notifier(vs->cmd_vqs[i]);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }