From patchwork Tue Feb 9 11:45:58 2016
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 8261881
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: stefanha@redhat.com
Date: Tue, 9 Feb 2016 12:45:58 +0100
Message-Id: <1455018374-4706-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH v3 00/16] aio: first part of aio_context_acquire/release pushdown

This is the infrastructure part of the aio_context_acquire/release
pushdown, which in turn is the first step towards a real multiqueue
block layer in QEMU.

The next step is to touch all the drivers and move calls to the
aio_context_acquire/release functions from aio-*.c to the drivers.
This will be done in a separate patch series, which I plan to post
before soft freeze.

While the number of inserted lines is large, more than half of them
are documentation and formal models of the code, plus the
implementation of the new "lockcnt" synchronization primitive. The
code is also very heavily commented.

The first four patches are new; the issue they fix was found after
the previous version of the series was posted. Everything else is
more or less the same as before.
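For readers who have not seen the series before, the idea behind a "lock + counter" primitive is that visitors walking a shared list only bump a counter, while a writer that wants to free elements takes the lock and waits for the count to drop to zero. The sketch below illustrates that idea with plain pthreads; all names (LockCnt, lockcnt_inc, etc.) are hypothetical and the layout is simplified, so it is not the actual QemuLockCnt API, which uses atomics and (on Linux) futexes rather than a condition variable:

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical sketch of a lock-plus-counter primitive.  Visitors that
 * only walk the protected list take the counter; writers that want to
 * remove and free elements take the mutex and wait for quiescence. */
typedef struct LockCnt {
    pthread_mutex_t mutex;
    pthread_cond_t cond;   /* signaled when count reaches zero */
    unsigned count;        /* number of visitors currently in flight */
} LockCnt;

void lockcnt_init(LockCnt *lc)
{
    pthread_mutex_init(&lc->mutex, NULL);
    pthread_cond_init(&lc->cond, NULL);
    lc->count = 0;
}

/* A visitor announces itself before walking the list... */
void lockcnt_inc(LockCnt *lc)
{
    pthread_mutex_lock(&lc->mutex);
    lc->count++;
    pthread_mutex_unlock(&lc->mutex);
}

/* ...and retires afterwards, waking anyone waiting for quiescence. */
void lockcnt_dec(LockCnt *lc)
{
    pthread_mutex_lock(&lc->mutex);
    if (--lc->count == 0) {
        pthread_cond_broadcast(&lc->cond);
    }
    pthread_mutex_unlock(&lc->mutex);
}

/* A writer takes the lock and waits until no visitor is in flight;
 * it can then safely unlink and free list elements. */
void lockcnt_lock_and_wait(LockCnt *lc)
{
    pthread_mutex_lock(&lc->mutex);
    while (lc->count > 0) {
        pthread_cond_wait(&lc->cond, &lc->mutex);
    }
}

void lockcnt_unlock(LockCnt *lc)
{
    pthread_mutex_unlock(&lc->mutex);
}
```

The point of the split is that visitors never contend with each other, only with writers, which is what lets the list walk in the dispatch phase proceed without a global lock.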
Paolo

v1->v2: Update documentation [Stefan]
        Remove g_usleep from testcase [Stefan]

v2->v3: Fix broken sentence [Eric]
        Use osdep.h [Eric]
        (v2->v3 diff after diffstat)

Paolo Bonzini (16):
  aio: introduce aio_context_in_iothread
  aio: do not really acquire/release the main AIO context
  aio: introduce aio_poll_internal
  aio: only call aio_poll_internal from iothread
  iothread: release AioContext around aio_poll
  qemu-thread: introduce QemuRecMutex
  aio: convert from RFifoLock to QemuRecMutex
  aio: rename bh_lock to list_lock
  qemu-thread: introduce QemuLockCnt
  aio: make ctx->list_lock a QemuLockCnt, subsuming ctx->walking_bh
  qemu-thread: optimize QemuLockCnt with futexes on Linux
  aio: tweak walking in dispatch phase
  aio-posix: remove walking_handlers, protecting AioHandler list with list_lock
  aio-win32: remove walking_handlers, protecting AioHandler list with list_lock
  aio: document locking
  aio: push aio_context_acquire/release down to dispatching

 aio-posix.c                     |  86 +++++----
 aio-win32.c                     | 106 ++++++-----
 async.c                         | 278 ++++++++++++++++++++++++----
 block/io.c                      |  14 +-
 docs/aio_poll_drain.promela     | 210 +++++++++++++++++++++
 docs/aio_poll_drain_bug.promela | 158 ++++++++++++++++
 docs/aio_poll_sync_io.promela   |  88 +++++++++
 docs/lockcnt.txt                | 342 ++++++++++++++++++++++++++++++++++
 docs/multiple-iothreads.txt     |  63 ++++---
 include/block/aio.h             |  69 ++++---
 include/qemu/futex.h            |  36 ++++
 include/qemu/rfifolock.h        |  54 ------
 include/qemu/thread-posix.h     |   6 +
 include/qemu/thread-win32.h     |  10 +
 include/qemu/thread.h           |  23 +++
 iothread.c                      |  20 +-
 stubs/iothread-lock.c           |   5 +
 tests/.gitignore                |   1 -
 tests/Makefile                  |   2 -
 tests/test-aio.c                |  22 ++-
 tests/test-rfifolock.c          |  91 ---------
 trace-events                    |  10 +
 util/Makefile.objs              |   2 +-
 util/lockcnt.c                  | 395 ++++++++++++++++++++++++++++++++++
 util/qemu-thread-posix.c        |  38 ++--
 util/qemu-thread-win32.c        |  25 +++
 util/rfifolock.c                |  78 --------
 27 files changed, 1782 insertions(+), 450 deletions(-)
 create mode 100644 docs/aio_poll_drain.promela
 create mode 100644 docs/aio_poll_drain_bug.promela
 create mode 100644 docs/aio_poll_sync_io.promela
 create mode 100644 docs/lockcnt.txt
 create mode 100644 include/qemu/futex.h
 delete mode 100644 include/qemu/rfifolock.h
 delete mode 100644 tests/test-rfifolock.c
 create mode 100644 util/lockcnt.c
 delete mode 100644 util/rfifolock.c

diff --git a/async.c b/async.c
index 9eab833..03a8e69 100644
--- a/async.c
+++ b/async.c
@@ -322,11 +322,10 @@ void aio_notify_accept(AioContext *ctx)
  * only, this only works when the calling thread holds the big QEMU lock.
  *
  * Because aio_poll is used in a loop, spurious wakeups are okay.
- * Therefore, the I/O thread calls qemu_event_set very liberally
- * (it helps that qemu_event_set is cheap on an already-set event).
- * generally used in a loop, it's okay to have spurious wakeups.
- * Similarly it is okay to return true when no progress was made
- * (as long as this doesn't happen forever, or you get livelock).
+ * Therefore, the I/O thread calls qemu_event_set very liberally;
+ * it helps that qemu_event_set is cheap on an already-set event.
+ * Similarly it is okay to return true when no progress was made,
+ * as long as this doesn't happen forever (or you get livelock).
  *
  * The important thing is that you need to report progress from
  * aio_poll(ctx, false) correctly. This is complicated and the
diff --git a/util/lockcnt.c b/util/lockcnt.c
index 56eb29e..71e8f8f 100644
--- a/util/lockcnt.c
+++ b/util/lockcnt.c
@@ -6,16 +6,7 @@
  * Author:
  *   Paolo Bonzini
  */
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
+#include "qemu/osdep.h"
 #include "qemu/thread.h"
 #include "qemu/atomic.h"
 #include "trace.h"
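As an aside, the async.c comment fixed above rests on one simple pattern: because the waiter re-checks its condition in a loop, the notifying side can set the event liberally and spurious wakeups are harmless. A minimal pthread sketch of that pattern follows; the names are illustrative and this is not QEMU's actual qemu_event implementation (which is futex-based on Linux):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Sketch of the "spurious wakeups are okay" pattern: the waiter re-tests
 * its predicate on every iteration, so the notifier may signal liberally,
 * even when no new work was queued, without causing bugs. */
typedef struct Event {
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    bool set;
} Event;

/* Cheap to call even if the event is already set. */
void event_set(Event *ev)
{
    pthread_mutex_lock(&ev->mutex);
    ev->set = true;
    pthread_cond_broadcast(&ev->cond);
    pthread_mutex_unlock(&ev->mutex);
}

/* Returns only once the event was set; tolerates spurious condition
 * variable wakeups because the predicate is re-checked in the loop. */
void event_wait(Event *ev)
{
    pthread_mutex_lock(&ev->mutex);
    while (!ev->set) {
        pthread_cond_wait(&ev->cond, &ev->mutex);
    }
    ev->set = false;   /* consume the wakeup, as a poll loop would */
    pthread_mutex_unlock(&ev->mutex);
}
```

The same reasoning is what makes it acceptable for aio_poll to return true without having made progress, as long as that cannot happen forever.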