From patchwork Wed Nov 29 21:26:22 2023
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Jean-Christophe Dubois, Fabiano Rosas, qemu-s390x@nongnu.org, Song Gao,
 Marcel Apfelbaum, Thomas Huth, Hyman Huang, Marcelo Tosatti,
 David Woodhouse, Andrey Smirnov, Peter Maydell, Kevin Wolf,
 Ilya Leoshkevich, Artyom Tarasenko, Mark Cave-Ayland, Max Filippov,
 Alistair Francis, Paul Durrant, Jagannathan Raman, Juan Quintela,
 Daniel P. Berrangé, qemu-arm@nongnu.org, Jason Wang, Gerd Hoffmann,
 Hanna Reitz, Marc-André Lureau, BALATON Zoltan, Daniel Henrique Barboza,
 Elena Ufimtseva, Aurelien Jarno, Hailiang Zhang, Roman Bolshakov,
 Huacai Chen, Fam Zheng, Eric Blake, Jiri Slaby, Alexander Graf,
 Liu Zhiwei, Weiwei Li, Eric Farman, Stafford Horne, David Hildenbrand,
 Markus Armbruster, Reinoud Zandijk, Palmer Dabbelt, Cameron Esfahani,
 xen-devel@lists.xenproject.org, Pavel Dovgalyuk, qemu-riscv@nongnu.org,
 Aleksandar Rikalo, John Snow, Sunil Muthuswamy, Michael Roth,
 David Gibson, "Michael S. Tsirkin", Richard Henderson, Bin Meng,
 Stefano Stabellini, kvm@vger.kernel.org, Stefan Hajnoczi,
 qemu-block@nongnu.org, Halil Pasic, Peter Xu, Anthony Perard,
 Harsh Prateek Bora, Alex Bennée, Eduardo Habkost, Paolo Bonzini,
 Vladimir Sementsov-Ogievskiy, Cédric Le Goater, qemu-ppc@nongnu.org,
 Philippe Mathieu-Daudé, Christian Borntraeger, Akihiko Odaki,
 Leonardo Bras, Nicholas Piggin, Jiaxun Yang
Subject: [PATCH 3/6] qemu/main-loop: rename qemu_cond_wait_iothread() to
 qemu_cond_wait_bql()
Date: Wed, 29 Nov 2023 16:26:22 -0500
Message-ID: <20231129212625.1051502-4-stefanha@redhat.com>
In-Reply-To: <20231129212625.1051502-1-stefanha@redhat.com>
References: <20231129212625.1051502-1-stefanha@redhat.com>

The name "iothread" is overloaded. Use the term Big QEMU Lock (BQL)
instead; it is already widely used and unambiguous.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Cédric Le Goater
Reviewed-by: Philippe Mathieu-Daudé
---
 include/qemu/main-loop.h          | 8 ++++----
 accel/tcg/tcg-accel-ops-rr.c      | 4 ++--
 hw/display/virtio-gpu.c           | 2 +-
 hw/ppc/spapr_events.c             | 2 +-
 system/cpu-throttle.c             | 2 +-
 system/cpus.c                     | 4 ++--
 target/i386/nvmm/nvmm-accel-ops.c | 2 +-
 target/i386/whpx/whpx-accel-ops.c | 2 +-
 8 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index 0b6a3e4824..ec2a70f041 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -373,17 +373,17 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(BQLLockAuto, qemu_bql_auto_unlock)
         = qemu_bql_auto_lock(__FILE__, __LINE__)
 
 /*
- * qemu_cond_wait_iothread: Wait on condition for the main loop mutex
+ * qemu_cond_wait_bql: Wait on condition for the main loop mutex
  *
  * This function atomically releases the main loop mutex and causes
  * the calling thread to block on the condition.
  */
-void qemu_cond_wait_iothread(QemuCond *cond);
+void qemu_cond_wait_bql(QemuCond *cond);
 
 /*
- * qemu_cond_timedwait_iothread: like the previous, but with timeout
+ * qemu_cond_timedwait_bql: like the previous, but with timeout
  */
-void qemu_cond_timedwait_iothread(QemuCond *cond, int ms);
+void qemu_cond_timedwait_bql(QemuCond *cond, int ms);
 
 /* internal interfaces */
 
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index c21215a094..1e5a688085 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -111,7 +111,7 @@ static void rr_wait_io_event(void)
 
     while (all_cpu_threads_idle()) {
         rr_stop_kick_timer();
-        qemu_cond_wait_iothread(first_cpu->halt_cond);
+        qemu_cond_wait_bql(first_cpu->halt_cond);
     }
 
     rr_start_kick_timer();
@@ -198,7 +198,7 @@ static void *rr_cpu_thread_fn(void *arg)
 
     /* wait for initial kick-off after machine start */
     while (first_cpu->stopped) {
-        qemu_cond_wait_iothread(first_cpu->halt_cond);
+        qemu_cond_wait_bql(first_cpu->halt_cond);
 
         /* process any pending work */
         CPU_FOREACH(cpu) {
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index b016d3bac8..67c5be1a4e 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1512,7 +1512,7 @@ void virtio_gpu_reset(VirtIODevice *vdev)
         g->reset_finished = false;
         qemu_bh_schedule(g->reset_bh);
         while (!g->reset_finished) {
-            qemu_cond_wait_iothread(&g->reset_cond);
+            qemu_cond_wait_bql(&g->reset_cond);
         }
     } else {
         virtio_gpu_reset_bh(g);
diff --git a/hw/ppc/spapr_events.c b/hw/ppc/spapr_events.c
index deb4641505..cb0eeee587 100644
--- a/hw/ppc/spapr_events.c
+++ b/hw/ppc/spapr_events.c
@@ -899,7 +899,7 @@ void spapr_mce_req_event(PowerPCCPU *cpu, bool recovered)
             }
             return;
         }
-        qemu_cond_wait_iothread(&spapr->fwnmi_machine_check_interlock_cond);
+        qemu_cond_wait_bql(&spapr->fwnmi_machine_check_interlock_cond);
         if (spapr->fwnmi_machine_check_addr == -1) {
             /*
              * If the machine was reset while waiting for the interlock,
diff --git a/system/cpu-throttle.c b/system/cpu-throttle.c
index e98836311b..1d2b73369e 100644
--- a/system/cpu-throttle.c
+++ b/system/cpu-throttle.c
@@ -54,7 +54,7 @@ static void cpu_throttle_thread(CPUState *cpu, run_on_cpu_data opaque)
     endtime_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + sleeptime_ns;
     while (sleeptime_ns > 0 && !cpu->stop) {
         if (sleeptime_ns > SCALE_MS) {
-            qemu_cond_timedwait_iothread(cpu->halt_cond,
+            qemu_cond_timedwait_bql(cpu->halt_cond,
                                          sleeptime_ns / SCALE_MS);
         } else {
             qemu_bql_unlock();
diff --git a/system/cpus.c b/system/cpus.c
index d5b98c11f5..eb24a4db8e 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -513,12 +513,12 @@ void qemu_bql_unlock(void)
     qemu_mutex_unlock(&qemu_global_mutex);
 }
 
-void qemu_cond_wait_iothread(QemuCond *cond)
+void qemu_cond_wait_bql(QemuCond *cond)
 {
     qemu_cond_wait(cond, &qemu_global_mutex);
 }
 
-void qemu_cond_timedwait_iothread(QemuCond *cond, int ms)
+void qemu_cond_timedwait_bql(QemuCond *cond, int ms)
 {
     qemu_cond_timedwait(cond, &qemu_global_mutex, ms);
 }
diff --git a/target/i386/nvmm/nvmm-accel-ops.c b/target/i386/nvmm/nvmm-accel-ops.c
index 387ccfcce5..0fe8a76820 100644
--- a/target/i386/nvmm/nvmm-accel-ops.c
+++ b/target/i386/nvmm/nvmm-accel-ops.c
@@ -48,7 +48,7 @@ static void *qemu_nvmm_cpu_thread_fn(void *arg)
             }
         }
         while (cpu_thread_is_idle(cpu)) {
-            qemu_cond_wait_iothread(cpu->halt_cond);
+            qemu_cond_wait_bql(cpu->halt_cond);
         }
         qemu_wait_io_event_common(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
diff --git a/target/i386/whpx/whpx-accel-ops.c b/target/i386/whpx/whpx-accel-ops.c
index 1f29346a88..d8bec46081 100644
--- a/target/i386/whpx/whpx-accel-ops.c
+++ b/target/i386/whpx/whpx-accel-ops.c
@@ -48,7 +48,7 @@ static void *whpx_cpu_thread_fn(void *arg)
             }
         }
         while (cpu_thread_is_idle(cpu)) {
-            qemu_cond_wait_iothread(cpu->halt_cond);
+            qemu_cond_wait_bql(cpu->halt_cond);
        }
         qemu_wait_io_event_common(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));