From patchwork Tue May 16 19:02:25 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13243529
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Coiby Xu, Marcel Apfelbaum, "Michael S. Tsirkin",
Jones" , Peter Xu , xen-devel@lists.xenproject.org, Kevin Wolf , Ronnie Sahlberg , Stefano Stabellini , Richard Henderson , Stefan Hajnoczi , Julia Suvorova , Hanna Reitz , Leonardo Bras , eesposit@redhat.com, Fam Zheng , Aarushi Mehta , David Woodhouse , Xie Yongji , Stefano Garzarella , qemu-block@nongnu.org, Eduardo Habkost , Paul Durrant , Stefan Weil , Anthony Perard , =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= , Peter Lieven , Paolo Bonzini , =?utf-8?q?P?= =?utf-8?q?hilippe_Mathieu-Daud=C3=A9?= , Juan Quintela Subject: [PATCH v6 07/20] block/export: stop using is_external in vhost-user-blk server Date: Tue, 16 May 2023 15:02:25 -0400 Message-Id: <20230516190238.8401-8-stefanha@redhat.com> In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com> References: <20230516190238.8401-1-stefanha@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 vhost-user activity must be suspended during bdrv_drained_begin/end(). This prevents new requests from interfering with whatever is happening in the drained section. Previously this was done using aio_set_fd_handler()'s is_external argument. In a multi-queue block layer world the aio_disable_external() API cannot be used since multiple AioContext may be processing I/O, not just one. Switch to BlockDevOps->drained_begin/end() callbacks. Signed-off-by: Stefan Hajnoczi Reviewed-by: Kevin Wolf --- block/export/vhost-user-blk-server.c | 28 ++++++++++++++++++++++++++-- util/vhost-user-server.c | 10 +++++----- 2 files changed, 31 insertions(+), 7 deletions(-) diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c index f51a36a14f..81b59761e3 100644 --- a/block/export/vhost-user-blk-server.c +++ b/block/export/vhost-user-blk-server.c @@ -212,15 +212,21 @@ static void blk_aio_attached(AioContext *ctx, void *opaque) { VuBlkExport *vexp = opaque; + /* + * The actual attach will happen in vu_blk_drained_end() and we just + * restore ctx here. + */ vexp->export.ctx = ctx; - vhost_user_server_attach_aio_context(&vexp->vu_server, ctx); } static void blk_aio_detach(void *opaque) { VuBlkExport *vexp = opaque; - vhost_user_server_detach_aio_context(&vexp->vu_server); + /* + * The actual detach already happened in vu_blk_drained_begin() but from + * this point on we must not access ctx anymore. + */ vexp->export.ctx = NULL; } @@ -272,6 +278,22 @@ static void vu_blk_exp_resize(void *opaque) vu_config_change_msg(&vexp->vu_server.vu_dev); } +/* Called with vexp->export.ctx acquired */ +static void vu_blk_drained_begin(void *opaque) +{ + VuBlkExport *vexp = opaque; + + vhost_user_server_detach_aio_context(&vexp->vu_server); +} + +/* Called with vexp->export.blk AioContext acquired */ +static void vu_blk_drained_end(void *opaque) +{ + VuBlkExport *vexp = opaque; + + vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx); +} + /* * Ensures that bdrv_drained_begin() waits until in-flight requests complete. 
  *
@@ -285,6 +307,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end = vu_blk_drained_end,
     .drained_poll = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 68c3bf162f..a12b2d1bba 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
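
For context beyond the diff: the vu_blk_dev_ops table above only takes effect
because the export registers it on its BlockBackend, after which the block
layer invokes the drained callbacks around bdrv_drained_begin/end(). A minimal
sketch of that registration pattern follows; it is not part of this patch, the
my_*() names are hypothetical, and blk_set_dev_ops() is assumed to have its
usual (blk, ops, opaque) signature.

/*
 * Illustrative sketch only: wiring a BlockDevOps table with drained
 * callbacks up to a BlockBackend. my_*() names are hypothetical.
 */
#include "qemu/osdep.h"
#include "sysemu/block-backend.h"

static void my_drained_begin(void *opaque)
{
    /* Stop accepting new requests while the drained section is active */
}

static void my_drained_end(void *opaque)
{
    /* Resume request processing once the drained section has ended */
}

static const BlockDevOps my_dev_ops = {
    .drained_begin = my_drained_begin,
    .drained_end   = my_drained_end,
};

static void my_export_setup(BlockBackend *blk, void *opaque)
{
    /* From here on, bdrv_drained_begin/end() invoke the callbacks above */
    blk_set_dev_ops(blk, &my_dev_ops, opaque);
}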