From patchwork Sat Sep 23 11:14:09 2017
X-Patchwork-Submitter: Manos Pitsidianakis
X-Patchwork-Id: 9967469
From: Manos Pitsidianakis <el13635@mail.ntua.gr>
To: qemu-devel
Cc: Kevin Wolf, Fam Zheng, Stefan Hajnoczi, qemu-block, Max Reitz
Date: Sat, 23 Sep 2017 14:14:09 +0300
Message-Id: <20170923111411.18626-2-el13635@mail.ntua.gr>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170923111411.18626-1-el13635@mail.ntua.gr>
References: <20170923111411.18626-1-el13635@mail.ntua.gr>
Subject: [Qemu-devel] [PATCH v3 1/3] block: add bdrv_co_drain_end callback

BlockDriverState has a bdrv_co_drain() callback but no equivalent for
the end of the drain. The throttle driver (block/throttle.c) needs a
way to mark the end of the drain in order to toggle io_limits_disabled
correctly, thus bdrv_co_drain_end is needed.
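
As an illustration of how a driver is expected to use the new hook (this
sketch is not part of the patch; the real block/throttle.c implementation
comes later in the series, and the ThrottleGroupMember, io_limits_disabled
and throttle_group_restart_tgm names are taken from QEMU's existing
throttling code):

    /* Sketch only: a filter driver pairing the two callbacks so that
     * throttling is suspended for exactly the duration of a drain. */
    #include "qemu/osdep.h"
    #include "block/block_int.h"
    #include "block/throttle-groups.h"

    static void coroutine_fn throttle_co_drain(BlockDriverState *bs)
    {
        ThrottleGroupMember *tgm = bs->opaque;

        /* Outermost drain: disable the limits and restart any queued
         * requests so the drain can complete. */
        if (atomic_fetch_inc(&tgm->io_limits_disabled) == 0) {
            throttle_group_restart_tgm(tgm);
        }
    }

    static void coroutine_fn throttle_co_drain_end(BlockDriverState *bs)
    {
        ThrottleGroupMember *tgm = bs->opaque;

        /* Matching end of the drain: re-enable the I/O limits. */
        assert(tgm->io_limits_disabled);
        atomic_dec(&tgm->io_limits_disabled);
    }

    static BlockDriver bdrv_throttle = {
        .format_name       = "throttle",
        /* ... */
        .bdrv_co_drain     = throttle_co_drain,
        .bdrv_co_drain_end = throttle_co_drain_end,
    };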
Signed-off-by: Manos Pitsidianakis
Reviewed-by: Fam Zheng
Reviewed-by: Stefan Hajnoczi
---
 include/block/block_int.h | 11 +++++++++--
 block/io.c                | 48 ++++++++++++++++++++++++++++++++++--------------
 2 files changed, 43 insertions(+), 16 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index ba4c383393..9ebdeb6db0 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -354,10 +354,17 @@ struct BlockDriver {
     int (*bdrv_probe_geometry)(BlockDriverState *bs, HDGeometry *geo);
 
     /**
-     * Drain and stop any internal sources of requests in the driver, and
-     * remain so until next I/O callback (e.g. bdrv_co_writev) is called.
+     * bdrv_co_drain is called if implemented in the beginning of a
+     * drain operation to drain and stop any internal sources of requests in
+     * the driver.
+     * bdrv_co_drain_end is called if implemented at the end of the drain.
+     *
+     * They should be used by the driver to e.g. manage scheduled I/O
+     * requests, or toggle an internal state. After the end of the drain new
+     * requests will continue normally.
      */
     void coroutine_fn (*bdrv_co_drain)(BlockDriverState *bs);
+    void coroutine_fn (*bdrv_co_drain_end)(BlockDriverState *bs);
 
     void (*bdrv_add_child)(BlockDriverState *parent, BlockDriverState *child,
                            Error **errp);
diff --git a/block/io.c b/block/io.c
index 4378ae4c7d..b0a10ad3ef 100644
--- a/block/io.c
+++ b/block/io.c
@@ -153,6 +153,7 @@ typedef struct {
     Coroutine *co;
     BlockDriverState *bs;
     bool done;
+    bool begin;
 } BdrvCoDrainData;
 
 static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
@@ -160,18 +161,23 @@ static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
     BdrvCoDrainData *data = opaque;
     BlockDriverState *bs = data->bs;
 
-    bs->drv->bdrv_co_drain(bs);
+    if (data->begin) {
+        bs->drv->bdrv_co_drain(bs);
+    } else {
+        bs->drv->bdrv_co_drain_end(bs);
+    }
 
     /* Set data->done before reading bs->wakeup. */
     atomic_mb_set(&data->done, true);
     bdrv_wakeup(bs);
 }
 
-static void bdrv_drain_invoke(BlockDriverState *bs)
+static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
 {
-    BdrvCoDrainData data = { .bs = bs, .done = false };
+    BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
 
-    if (!bs->drv || !bs->drv->bdrv_co_drain) {
+    if (!bs->drv || (begin && !bs->drv->bdrv_co_drain) ||
+            (!begin && !bs->drv->bdrv_co_drain_end)) {
         return;
     }
 
@@ -180,15 +186,16 @@ static void bdrv_drain_invoke(BlockDriverState *bs)
     BDRV_POLL_WHILE(bs, !data.done);
 }
 
-static bool bdrv_drain_recurse(BlockDriverState *bs)
+static bool bdrv_drain_recurse(BlockDriverState *bs, bool begin)
 {
     BdrvChild *child, *tmp;
     bool waited;
 
-    waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
-
     /* Ensure any pending metadata writes are submitted to bs->file. */
-    bdrv_drain_invoke(bs);
+    bdrv_drain_invoke(bs, begin);
+
+    /* Wait for drained requests to finish */
+    waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
 
     QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
         BlockDriverState *bs = child->bs;
@@ -205,7 +212,7 @@
              */
             bdrv_ref(bs);
         }
-        waited |= bdrv_drain_recurse(bs);
+        waited |= bdrv_drain_recurse(bs, begin);
         if (in_main_loop) {
             bdrv_unref(bs);
         }
@@ -221,12 +228,18 @@
     BlockDriverState *bs = data->bs;
 
     bdrv_dec_in_flight(bs);
-    bdrv_drained_begin(bs);
+    if (data->begin) {
+        bdrv_drained_begin(bs);
+    } else {
+        bdrv_drained_end(bs);
+    }
+
     data->done = true;
     aio_co_wake(co);
 }
 
-static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs)
+static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
+                                                bool begin)
 {
     BdrvCoDrainData data;
 
@@ -239,6 +252,7 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs)
         .co = qemu_coroutine_self(),
         .bs = bs,
         .done = false,
+        .begin = begin,
     };
     bdrv_inc_in_flight(bs);
     aio_bh_schedule_oneshot(bdrv_get_aio_context(bs),
@@ -253,7 +267,7 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs)
 
 void bdrv_drained_begin(BlockDriverState *bs)
 {
     if (qemu_in_coroutine()) {
-        bdrv_co_yield_to_drain(bs);
+        bdrv_co_yield_to_drain(bs, true);
         return;
     }
 
@@ -262,17 +276,22 @@ void bdrv_drained_begin(BlockDriverState *bs)
         bdrv_parent_drained_begin(bs);
     }
 
-    bdrv_drain_recurse(bs);
+    bdrv_drain_recurse(bs, true);
 }
 
 void bdrv_drained_end(BlockDriverState *bs)
 {
+    if (qemu_in_coroutine()) {
+        bdrv_co_yield_to_drain(bs, false);
+        return;
+    }
     assert(bs->quiesce_counter > 0);
     if (atomic_fetch_dec(&bs->quiesce_counter) > 1) {
         return;
     }
 
     bdrv_parent_drained_end(bs);
+    bdrv_drain_recurse(bs, false);
     aio_enable_external(bdrv_get_aio_context(bs));
 }
 
@@ -350,7 +369,7 @@ void bdrv_drain_all_begin(void)
             aio_context_acquire(aio_context);
             for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
                 if (aio_context == bdrv_get_aio_context(bs)) {
-                    waited |= bdrv_drain_recurse(bs);
+                    waited |= bdrv_drain_recurse(bs, true);
                 }
             }
             aio_context_release(aio_context);
@@ -371,6 +390,7 @@ void bdrv_drain_all_end(void)
 
         aio_context_acquire(aio_context);
         aio_enable_external(aio_context);
        bdrv_parent_drained_end(bs);
+        bdrv_drain_recurse(bs, false);
         aio_context_release(aio_context);
     }
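
For completeness, this is roughly how a caller sees the change (illustrative
only; the helper name below is made up): a drained section taken with
bdrv_drained_begin()/bdrv_drained_end() now reaches the driver at both edges,
so whatever bdrv_co_drain suspended can be resumed in bdrv_co_drain_end.

    /* Hypothetical caller, just to show the pairing of the two hooks. */
    static void do_something_quiesced(BlockDriverState *bs)
    {
        /* Quiesces parents and children; the driver's bdrv_co_drain
         * callback (if implemented) runs here. */
        bdrv_drained_begin(bs);

        /* ... operate on bs while no new requests are submitted ... */

        /* Ends the drained section; with this patch the driver's
         * bdrv_co_drain_end callback runs here, e.g. letting the
         * throttle filter re-enable its I/O limits. */
        bdrv_drained_end(bs);
    }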