From patchwork Fri Apr 1 13:19:51 2016
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 8724621
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: famz@redhat.com, tubo@linux.vnet.ibm.com, mst@redhat.com, borntraeger@de.ibm.com, stefanha@redhat.com, cornelia.huck@de.ibm.com
Date: Fri, 1 Apr 2016 15:19:51 +0200
Message-Id: <1459516794-23629-7-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1459516794-23629-1-git-send-email-pbonzini@redhat.com>
References: <1459516794-23629-1-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane

From: "Michael S. Tsirkin"

In addition to handling I/O in the vcpu thread and in the iothread, dataplane
introduces yet another mode: handling it by aio. This reuses the same handler
as the previous modes, which triggers races because those handlers were not
designed to be reentrant. Use a separate handler just for aio, and disable
the regular handlers when dataplane is active.

Signed-off-by: Michael S. Tsirkin
Signed-off-by: Paolo Bonzini
Reviewed-by: Cornelia Huck
---
 hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
 hw/block/virtio-blk.c           | 27 +++++++++++++++++----------
 include/hw/virtio/virtio-blk.h  |  2 ++
 3 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index ed9d0ce..fd06726 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -184,6 +184,17 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     g_free(s);
 }
 
+static void virtio_blk_data_plane_handle_output(VirtIODevice *vdev,
+                                                VirtQueue *vq)
+{
+    VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+    assert(s->dataplane);
+    assert(s->dataplane_started);
+
+    virtio_blk_handle_vq(s, vq);
+}
+
 /* Context: QEMU global mutex held */
 void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 {
@@ -226,6 +237,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 
     /* Get this show started by hooking up our callbacks */
     aio_context_acquire(s->ctx);
+    virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
     aio_context_release(s->ctx);
     return;
@@ -262,6 +274,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
 
     /* Stop notifications for new requests from guest */
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
+    virtio_set_queue_aio(s->vq, NULL);
 
     /* Drain and switch bs back to the QEMU main loop */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 151fe78..3f88f8c 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -578,20 +578,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
     }
 }
 
-static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
 {
-    VirtIOBlock *s = VIRTIO_BLK(vdev);
     VirtIOBlockReq *req;
     MultiReqBuffer mrb = {};
 
-    /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
-     * dataplane here instead of waiting for .set_status().
-     */
-    if (s->dataplane && !s->dataplane_started) {
-        virtio_blk_data_plane_start(s->dataplane);
-        return;
-    }
-
     blk_io_plug(s->blk);
 
     while ((req = virtio_blk_get_request(s))) {
@@ -605,6 +596,22 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
     blk_io_unplug(s->blk);
 }
 
+static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+    if (s->dataplane) {
+        /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
+         * dataplane here instead of waiting for .set_status().
+         */
+        virtio_blk_data_plane_start(s->dataplane);
+        if (!s->dataplane_disabled) {
+            return;
+        }
+    }
+    virtio_blk_handle_vq(s, vq);
+}
+
 static void virtio_blk_dma_restart_bh(void *opaque)
 {
     VirtIOBlock *s = opaque;
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 59ae1e4..8f2b056 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -86,4 +86,6 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb);
 
 void virtio_blk_submit_multireq(BlockBackend *blk, MultiReqBuffer *mrb);
 
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
+
 #endif
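
For readers following the handler split, the sketch below restates the control
flow introduced by the diff as a tiny standalone C program. The struct and
helper names are simplified stand-ins invented for this illustration, not the
QEMU API; only the dispatch logic is mirrored: a dedicated aio handler that may
assert dataplane is already running, and a regular handler that starts
dataplane on the first kick and only processes the queue itself when dataplane
could not be used.

/*
 * Minimal standalone sketch (not the QEMU API) of the handler split above.
 * Types and helpers are simplified stand-ins; only the control flow of
 * virtio_blk_handle_output() vs. virtio_blk_data_plane_handle_output()
 * is mirrored.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool dataplane;          /* a dataplane (iothread) is configured */
    bool dataplane_started;  /* dataplane start has completed */
    bool dataplane_disabled; /* dataplane setup failed; fall back to main loop */
} BlockSketch;

/* Stand-in for virtio_blk_handle_vq(): drain and process the virtqueue. */
static void handle_vq(BlockSketch *s)
{
    printf("processing virtqueue (dataplane_started=%d)\n", s->dataplane_started);
}

/* Mirrors the new aio-only handler: it is registered only while dataplane
 * is active, so it can assert that dataplane has already been started. */
static void data_plane_handle_output(BlockSketch *s)
{
    assert(s->dataplane);
    assert(s->dataplane_started);
    handle_vq(s);
}

/* Mirrors the reworked regular handler: a guest kick before DRIVER_OK starts
 * dataplane, and the non-dataplane path only processes the queue itself when
 * dataplane could not be used. */
static void handle_output(BlockSketch *s)
{
    if (s->dataplane) {
        if (!s->dataplane_started) {
            s->dataplane_started = true;  /* stands in for data_plane_start() */
        }
        if (!s->dataplane_disabled) {
            return;  /* the aio handler in the iothread takes over */
        }
    }
    handle_vq(s);
}

int main(void)
{
    BlockSketch s = { .dataplane = true };

    handle_output(&s);            /* first kick: starts dataplane and returns */
    data_plane_handle_output(&s); /* later kicks are handled in the iothread */
    return 0;
}

With this split, neither handler has to be reentrant: the regular handler never
processes requests while dataplane is active, and the aio handler only ever
runs in the iothread's AioContext.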