From patchwork Tue Dec 5 18:19:58 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Philippe Mathieu-Daudé, Stefan Hajnoczi, Vladimir Sementsov-Ogievskiy,
    Cleber Rosa, Xie Changlong, Paul Durrant, Ari Sundholm, Jason Wang,
    Eric Blake, John Snow,
    Eduardo Habkost, Wen Congyang, Alberto Garcia, Anthony Perard,
    "Michael S. Tsirkin", Stefano Stabellini, qemu-block@nongnu.org,
    Juan Quintela, Paolo Bonzini, Kevin Wolf, Coiby Xu, Fabiano Rosas,
    Hanna Reitz, Zhang Chen, Daniel P. Berrangé, Pavel Dovgalyuk,
    Peter Xu, Emanuele Giuseppe Esposito, Fam Zheng, Leonardo Bras,
    David Hildenbrand, Li Zhijian, xen-devel@lists.xenproject.org
Subject: [PATCH v2 01/14] virtio-scsi: replace AioContext lock with tmf_bh_lock
Date: Tue, 5 Dec 2023 13:19:58 -0500
Message-ID: <20231205182011.1976568-2-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>

Protect the Task Management Function BH state with a lock. The TMF BH
runs in the main loop thread. An IOThread might process a TMF at the
same time as the TMF BH is running. Therefore tmf_bh_list and tmf_bh
must be protected by a lock.

Run TMF request completion in the IOThread using aio_wait_bh_oneshot().
This avoids more locking to protect the virtqueue and SCSI layer state.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
---
 include/hw/virtio/virtio-scsi.h |  3 +-
 hw/scsi/virtio-scsi.c           | 62 ++++++++++++++++++++++-----------
 2 files changed, 43 insertions(+), 22 deletions(-)

diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 779568ab5d..da8cb928d9 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -85,8 +85,9 @@ struct VirtIOSCSI {
 
     /*
      * TMFs deferred to main loop BH. These fields are protected by
-     * virtio_scsi_acquire().
+     * tmf_bh_lock.
      */
+    QemuMutex tmf_bh_lock;
     QEMUBH *tmf_bh;
     QTAILQ_HEAD(, VirtIOSCSIReq) tmf_bh_list;
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 9c751bf296..4f8d35facc 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -123,6 +123,30 @@ static void virtio_scsi_complete_req(VirtIOSCSIReq *req)
     virtio_scsi_free_req(req);
 }
 
+static void virtio_scsi_complete_req_bh(void *opaque)
+{
+    VirtIOSCSIReq *req = opaque;
+
+    virtio_scsi_complete_req(req);
+}
+
+/*
+ * Called from virtio_scsi_do_one_tmf_bh() in main loop thread. The main loop
+ * thread cannot touch the virtqueue since that could race with an IOThread.
+ */
+static void virtio_scsi_complete_req_from_main_loop(VirtIOSCSIReq *req)
+{
+    VirtIOSCSI *s = req->dev;
+
+    if (!s->ctx || s->ctx == qemu_get_aio_context()) {
+        /* No need to schedule a BH when there is no IOThread */
+        virtio_scsi_complete_req(req);
+    } else {
+        /* Run request completion in the IOThread */
+        aio_wait_bh_oneshot(s->ctx, virtio_scsi_complete_req_bh, req);
+    }
+}
+
 static void virtio_scsi_bad_req(VirtIOSCSIReq *req)
 {
     virtio_error(VIRTIO_DEVICE(req->dev), "wrong size for virtio-scsi headers");
@@ -338,10 +362,7 @@ static void virtio_scsi_do_one_tmf_bh(VirtIOSCSIReq *req)
 
 out:
     object_unref(OBJECT(d));
-
-    virtio_scsi_acquire(s);
-    virtio_scsi_complete_req(req);
-    virtio_scsi_release(s);
+    virtio_scsi_complete_req_from_main_loop(req);
 }
 
 /* Some TMFs must be processed from the main loop thread */
@@ -354,18 +375,16 @@ static void virtio_scsi_do_tmf_bh(void *opaque)
 
     GLOBAL_STATE_CODE();
 
-    virtio_scsi_acquire(s);
+    WITH_QEMU_LOCK_GUARD(&s->tmf_bh_lock) {
+        QTAILQ_FOREACH_SAFE(req, &s->tmf_bh_list, next, tmp) {
+            QTAILQ_REMOVE(&s->tmf_bh_list, req, next);
+            QTAILQ_INSERT_TAIL(&reqs, req, next);
+        }
 
-    QTAILQ_FOREACH_SAFE(req, &s->tmf_bh_list, next, tmp) {
-        QTAILQ_REMOVE(&s->tmf_bh_list, req, next);
-        QTAILQ_INSERT_TAIL(&reqs, req, next);
+        qemu_bh_delete(s->tmf_bh);
+        s->tmf_bh = NULL;
     }
 
-    qemu_bh_delete(s->tmf_bh);
-    s->tmf_bh = NULL;
-
-    virtio_scsi_release(s);
-
     QTAILQ_FOREACH_SAFE(req, &reqs, next, tmp) {
         QTAILQ_REMOVE(&reqs, req, next);
         virtio_scsi_do_one_tmf_bh(req);
@@ -379,8 +398,7 @@ static void virtio_scsi_reset_tmf_bh(VirtIOSCSI *s)
 
     GLOBAL_STATE_CODE();
 
-    virtio_scsi_acquire(s);
-
+    /* Called after ioeventfd has been stopped, so tmf_bh_lock is not needed */
     if (s->tmf_bh) {
         qemu_bh_delete(s->tmf_bh);
         s->tmf_bh = NULL;
@@ -393,19 +411,19 @@ static void virtio_scsi_reset_tmf_bh(VirtIOSCSI *s)
         req->resp.tmf.response = VIRTIO_SCSI_S_TARGET_FAILURE;
         virtio_scsi_complete_req(req);
     }
-
-    virtio_scsi_release(s);
 }
 
 static void virtio_scsi_defer_tmf_to_bh(VirtIOSCSIReq *req)
 {
     VirtIOSCSI *s = req->dev;
 
-    QTAILQ_INSERT_TAIL(&s->tmf_bh_list, req, next);
+    WITH_QEMU_LOCK_GUARD(&s->tmf_bh_lock) {
+        QTAILQ_INSERT_TAIL(&s->tmf_bh_list, req, next);
 
-    if (!s->tmf_bh) {
-        s->tmf_bh = qemu_bh_new(virtio_scsi_do_tmf_bh, s);
-        qemu_bh_schedule(s->tmf_bh);
+        if (!s->tmf_bh) {
+            s->tmf_bh = qemu_bh_new(virtio_scsi_do_tmf_bh, s);
+            qemu_bh_schedule(s->tmf_bh);
+        }
     }
 }
 
@@ -1235,6 +1253,7 @@ static void virtio_scsi_device_realize(DeviceState *dev, Error **errp)
     Error *err = NULL;
 
    QTAILQ_INIT(&s->tmf_bh_list);
+    qemu_mutex_init(&s->tmf_bh_lock);
 
     virtio_scsi_common_realize(dev,
                                virtio_scsi_handle_ctrl,
@@ -1277,6 +1296,7 @@ static void virtio_scsi_device_unrealize(DeviceState *dev)
 
     qbus_set_hotplug_handler(BUS(&s->bus), NULL);
     virtio_scsi_common_unrealize(dev);
+    qemu_mutex_destroy(&s->tmf_bh_lock);
 }
 
 static Property virtio_scsi_properties[] = {
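
For readers unfamiliar with the pattern applied above: producers append
requests to a mutex-protected list, and the BH splices the whole list out
while holding the lock, then completes the requests with the lock dropped.
Below is a minimal, self-contained sketch of that pattern in plain C with
pthreads instead of QEMU's QemuMutex/QTAILQ/QEMUBH primitives. The names
tmf_request, defer_tmf(), and drain_deferred_tmfs() are invented for
illustration and are not QEMU APIs.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct tmf_request {
    int tag;
    struct tmf_request *next;
};

/* All three variables below are protected by tmf_lock. */
static pthread_mutex_t tmf_lock = PTHREAD_MUTEX_INITIALIZER;
static struct tmf_request *tmf_head;
static struct tmf_request **tmf_tail = &tmf_head;
static bool drain_scheduled;

/*
 * Producer side (compare virtio_scsi_defer_tmf_to_bh): append under the
 * lock and report whether the caller needs to schedule the drain, so it
 * is scheduled at most once per batch.
 */
static bool defer_tmf(struct tmf_request *req)
{
    bool need_schedule;

    pthread_mutex_lock(&tmf_lock);
    req->next = NULL;
    *tmf_tail = req;
    tmf_tail = &req->next;
    need_schedule = !drain_scheduled;
    drain_scheduled = true;
    pthread_mutex_unlock(&tmf_lock);

    return need_schedule;
}

/*
 * Consumer side (compare virtio_scsi_do_tmf_bh): splice the whole list
 * out while holding the lock, then process the requests with the lock
 * dropped so completion never runs under tmf_lock.
 */
static void drain_deferred_tmfs(void)
{
    struct tmf_request *reqs;

    pthread_mutex_lock(&tmf_lock);
    reqs = tmf_head;
    tmf_head = NULL;
    tmf_tail = &tmf_head;
    drain_scheduled = false;
    pthread_mutex_unlock(&tmf_lock);

    while (reqs) {
        struct tmf_request *req = reqs;
        reqs = req->next;
        printf("completing TMF tag %d\n", req->tag); /* lock not held */
        free(req);
    }
}

int main(void)
{
    for (int i = 0; i < 3; i++) {
        struct tmf_request *req = calloc(1, sizeof(*req));
        req->tag = i;
        if (defer_tmf(req)) {
            /* In QEMU this would be qemu_bh_schedule(); call inline here. */
            drain_deferred_tmfs();
        }
    }
    return 0;
}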