From patchwork Tue Apr 25 17:27:13 2023
X-Patchwork-Submitter: Stefan Hajnoczi <stefanha@redhat.com>
X-Patchwork-Id: 13223570
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Daniel P. Berrangé, Juan Quintela, Julia Suvorova, Kevin Wolf,
    xen-devel@lists.xenproject.org, eesposit@redhat.com,
    Richard Henderson, Fam Zheng, "Michael S. Tsirkin",
Tsirkin" , Coiby Xu , David Woodhouse , Marcel Apfelbaum , Peter Lieven , Paul Durrant , Stefan Hajnoczi , "Richard W.M. Jones" , qemu-block@nongnu.org, Stefano Garzarella , Anthony Perard , Stefan Weil , Xie Yongji , Paolo Bonzini , Aarushi Mehta , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Eduardo Habkost , Stefano Stabellini , Hanna Reitz , Ronnie Sahlberg Subject: [PATCH v4 17/20] virtio-blk: implement BlockDevOps->drained_begin() Date: Tue, 25 Apr 2023 13:27:13 -0400 Message-Id: <20230425172716.1033562-18-stefanha@redhat.com> In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com> References: <20230425172716.1033562-1-stefanha@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7 Detach ioeventfds during drained sections to stop I/O submission from the guest. virtio-blk is no longer reliant on aio_disable_external() after this patch. This will allow us to remove the aio_disable_external() API once all other code that relies on it is converted. Take extra care to avoid attaching/detaching ioeventfds if the data plane is started/stopped during a drained section. This should be rare, but maybe the mirror block job can trigger it. Signed-off-by: Stefan Hajnoczi --- hw/block/dataplane/virtio-blk.c | 17 +++++++++------ hw/block/virtio-blk.c | 38 ++++++++++++++++++++++++++++++++- 2 files changed, 48 insertions(+), 7 deletions(-) diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c index bd7cc6e76b..d77fc6028c 100644 --- a/hw/block/dataplane/virtio-blk.c +++ b/hw/block/dataplane/virtio-blk.c @@ -245,13 +245,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev) } /* Get this show started by hooking up our callbacks */ - aio_context_acquire(s->ctx); - for (i = 0; i < nvqs; i++) { - VirtQueue *vq = virtio_get_queue(s->vdev, i); + if (!blk_in_drain(s->conf->conf.blk)) { + aio_context_acquire(s->ctx); + for (i = 0; i < nvqs; i++) { + VirtQueue *vq = virtio_get_queue(s->vdev, i); - virtio_queue_aio_attach_host_notifier(vq, s->ctx); + virtio_queue_aio_attach_host_notifier(vq, s->ctx); + } + aio_context_release(s->ctx); } - aio_context_release(s->ctx); return 0; fail_aio_context: @@ -317,7 +319,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev) trace_virtio_blk_data_plane_stop(s); aio_context_acquire(s->ctx); - aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s); + + if (!blk_in_drain(s->conf->conf.blk)) { + aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s); + } /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */ blk_drain(s->conf->conf.blk); diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index cefca93b31..d8dedc575c 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -1109,8 +1109,44 @@ static void virtio_blk_resize(void *opaque) aio_bh_schedule_oneshot(qemu_get_aio_context(), virtio_resize_cb, vdev); } +/* Suspend virtqueue ioeventfd processing during drain */ +static void virtio_blk_drained_begin(void *opaque) +{ + VirtIOBlock *s = opaque; + VirtIODevice *vdev = VIRTIO_DEVICE(opaque); + AioContext *ctx = blk_get_aio_context(s->conf.conf.blk); + + if (!s->dataplane || !s->dataplane_started) { + return; + } + + for (uint16_t i = 0; i < s->conf.num_queues; i++) { + VirtQueue *vq = virtio_get_queue(vdev, i); + virtio_queue_aio_detach_host_notifier(vq, ctx); + } +} + +/* Resume virtqueue ioeventfd processing after drain */ +static void virtio_blk_drained_end(void *opaque) +{ + VirtIOBlock *s = opaque; + VirtIODevice *vdev = VIRTIO_DEVICE(opaque); + AioContext *ctx 
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, ctx);
+    }
+}
+
 static const BlockDevOps virtio_block_ops = {
-    .resize_cb = virtio_blk_resize,
+    .resize_cb     = virtio_blk_resize,
+    .drained_begin = virtio_blk_drained_begin,
+    .drained_end   = virtio_blk_drained_end,
 };
 
 static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
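
The subtle case described in the commit message is a data plane that
stops while the device is drained: drained_end() must then be a no-op
rather than re-attach host notifiers for a device that is no longer
running. The standalone model below illustrates that invariant. It uses
hypothetical stand-ins (VirtioBlkModel, notifier_attached) in place of
the real VirtIOBlock and virtqueue host notifiers; it is a sketch of
the idea, not QEMU code.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for VirtIOBlock; not a QEMU type. */
typedef struct {
    bool dataplane_started;
    unsigned num_queues;
    bool notifier_attached[8]; /* one flag per virtqueue ioeventfd */
} VirtioBlkModel;

/* Models BlockDevOps->drained_begin(): stop guest I/O submission. */
static void drained_begin(VirtioBlkModel *s)
{
    if (!s->dataplane_started) {
        return; /* nothing was attached, so there is nothing to detach */
    }
    for (unsigned i = 0; i < s->num_queues; i++) {
        s->notifier_attached[i] = false; /* detach the ioeventfd */
    }
}

/* Models BlockDevOps->drained_end(): resume guest I/O submission. */
static void drained_end(VirtioBlkModel *s)
{
    if (!s->dataplane_started) {
        return; /* data plane stopped while drained: stay detached */
    }
    for (unsigned i = 0; i < s->num_queues; i++) {
        s->notifier_attached[i] = true; /* re-attach the ioeventfd */
    }
}

int main(void)
{
    VirtioBlkModel s = { .dataplane_started = true, .num_queues = 2 };

    drained_begin(&s);           /* drained section begins: ioeventfds off */
    s.dataplane_started = false; /* e.g. data plane stopped while drained */
    drained_end(&s);             /* must be a no-op, not a re-attach */

    printf("queue 0 attached after drain: %d\n", s.notifier_attached[0]);
    return 0;
}

In the actual patch the same guard is the "!s->dataplane ||
!s->dataplane_started" check, and attaching/detaching is done with
virtio_queue_aio_attach_host_notifier() and
virtio_queue_aio_detach_host_notifier() in the BlockBackend's
AioContext rather than by flipping flags.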