From patchwork Thu Nov 19 12:59:36 2020
X-Patchwork-Submitter: Andrey Gruzdev
X-Patchwork-Id: 11917305
From: Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>
To: qemu-devel@nongnu.org
Cc: Den Lunev, Eric Blake, Paolo Bonzini, Juan Quintela,
    "Dr. David Alan Gilbert", Markus Armbruster, Peter Xu,
    Andrey Gruzdev
David Alan Gilbert" , Markus Armbruster , Peter Xu , Andrey Gruzdev Subject: [PATCH v3 3/7] support UFFD write fault processing in ram_save_iterate() Date: Thu, 19 Nov 2020 15:59:36 +0300 Message-Id: <20201119125940.20017-4-andrey.gruzdev@virtuozzo.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201119125940.20017-1-andrey.gruzdev@virtuozzo.com> References: <20201119125940.20017-1-andrey.gruzdev@virtuozzo.com> MIME-Version: 1.0 Received-SPF: pass client-ip=185.231.240.75; envelope-from=andrey.gruzdev@virtuozzo.com; helo=relay3.sw.ru X-detected-operating-system: by eggs.gnu.org: First seen = 2020/11/19 07:59:53 X-ACL-Warn: Detected OS = Linux 3.11 and newer [fuzzy] X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" Reply-to: Andrey Gruzdev X-Patchwork-Original-From: Andrey Gruzdev via From: Andrey Gruzdev In this particular implementation the same single migration thread is responsible for both normal linear dirty page migration and procesing UFFD page fault events. Processing write faults includes reading UFFD file descriptor, finding respective RAM block and saving faulting page to the migration stream. After page has been saved, write protection can be removed. Since asynchronous version of qemu_put_buffer() is expected to be used to save pages, we also have to flush migraion stream prior to un-protecting saved memory range. Write protection is being removed for any previously protected memory chunk that has hit the migration stream. That's valid for pages from linear page scan along with write fault pages. 
Signed-off-by: Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>
---
 migration/ram.c | 124 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 115 insertions(+), 9 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 7f273c9996..08a1d7a252 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -314,6 +314,8 @@ struct RAMState {
     ram_addr_t last_page;
     /* last ram version we have seen */
     uint32_t last_version;
+    /* 'write-tracking' migration is enabled */
+    bool ram_wt_enabled;
     /* We are in the first round */
     bool ram_bulk_stage;
     /* The free page optimization is enabled */
@@ -574,8 +576,6 @@ static int uffd_protect_memory(int uffd, hwaddr start, hwaddr length, bool wp)
     return 0;
 }
 
-__attribute__ ((unused))
-static int uffd_read_events(int uffd, struct uffd_msg *msgs, int count);
 __attribute__ ((unused))
 static bool uffd_poll_events(int uffd, int tmo);
 
@@ -1929,6 +1929,86 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
     return pages;
 }
 
+/**
+ * ram_find_block_by_host_address: find RAM block containing host page
+ *
+ * Returns true if the RAM block is found and pss->block/page are
+ * pointing to the given host page, false otherwise
+ *
+ * @rs: current RAM state
+ * @pss: page-search-status structure
+ */
+static bool ram_find_block_by_host_address(RAMState *rs, PageSearchStatus *pss,
+        hwaddr page_address)
+{
+    bool found = false;
+
+    pss->block = rs->last_seen_block;
+    do {
+        if (page_address >= (hwaddr) pss->block->host &&
+            (page_address + TARGET_PAGE_SIZE) <=
+                    ((hwaddr) pss->block->host + pss->block->used_length)) {
+            pss->page = (unsigned long)
+                    ((page_address - (hwaddr) pss->block->host) >> TARGET_PAGE_BITS);
+            found = true;
+            break;
+        }
+
+        pss->block = QLIST_NEXT_RCU(pss->block, next);
+        if (!pss->block) {
+            /* Hit the end of the list */
+            pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
+        }
+    } while (pss->block != rs->last_seen_block);
+
+    rs->last_seen_block = pss->block;
+    /*
+     * Since we are in the same loop with ram_find_and_save_block(),
+     * we need to reset pss->complete_round after switching to
+     * another block/page in pss.
+     */
+    pss->complete_round = false;
+
+    return found;
+}
+
+/**
+ * get_fault_page: try to get next UFFD write fault page and, if a pending
+ * fault is found, put info about the RAM block and block page into the
+ * pss structure
+ *
+ * Returns true if a UFFD write fault was detected, false otherwise
+ *
+ * @rs: current RAM state
+ * @pss: page-search-status structure
+ */
+static bool get_fault_page(RAMState *rs, PageSearchStatus *pss)
+{
+    struct uffd_msg uffd_msg;
+    hwaddr page_address;
+    int res;
+
+    if (!rs->ram_wt_enabled) {
+        return false;
+    }
+
+    res = uffd_read_events(rs->uffdio_fd, &uffd_msg, 1);
+    if (res <= 0) {
+        return false;
+    }
+
+    page_address = uffd_msg.arg.pagefault.address;
+    if (!ram_find_block_by_host_address(rs, pss, page_address)) {
+        /* If we couldn't find the respective block, just unprotect the faulting page */
+        uffd_protect_memory(rs->uffdio_fd, page_address, TARGET_PAGE_SIZE, false);
+        error_report("ram_find_block_by_host_address() failed: address=0x%0lx",
+                page_address);
+        return false;
+    }
+
+    return true;
+}
+
 /**
  * ram_find_and_save_block: finds a dirty page and sends it to f
  *
@@ -1955,25 +2035,50 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
         return pages;
     }
 
+    if (!rs->last_seen_block) {
+        rs->last_seen_block = QLIST_FIRST_RCU(&ram_list.blocks);
+    }
     pss.block = rs->last_seen_block;
     pss.page = rs->last_page;
     pss.complete_round = false;
 
-    if (!pss.block) {
-        pss.block = QLIST_FIRST_RCU(&ram_list.blocks);
-    }
-
     do {
+        ram_addr_t page;
+        ram_addr_t page_to;
+
         again = true;
-        found = get_queued_page(rs, &pss);
-
+        /*
+         * In case of 'write-tracking' migration we first try
+         * to poll UFFD and get a write page fault event.
+         */
+        found = get_fault_page(rs, &pss);
+        if (!found) {
+            /* No write fault, try to fetch something from the priority queue */
+            found = get_queued_page(rs, &pss);
+        }
         if (!found) {
             /* priority queue empty, so just search for something dirty */
             found = find_dirty_block(rs, &pss, &again);
         }
 
         if (found) {
+            page = pss.page;
             pages = ram_save_host_page(rs, &pss, last_stage);
+            page_to = pss.page;
+
+            /* Check if the page is from a UFFD-managed region */
+            if (pss.block->flags & RAM_UF_WRITEPROTECT) {
+                hwaddr page_address = (hwaddr) pss.block->host +
+                        ((hwaddr) page << TARGET_PAGE_BITS);
+                hwaddr run_length = (hwaddr) (page_to - page + 1) << TARGET_PAGE_BITS;
+                int res;
+
+                /* Flush async buffers before un-protecting */
+                qemu_fflush(rs->f);
+                /* Un-protect the saved memory range */
+                res = uffd_protect_memory(rs->uffdio_fd, page_address, run_length, false);
+                if (res < 0) {
+                    break;
+                }
+            }
         }
     } while (!pages && again);
 
@@ -2086,7 +2191,8 @@ static void ram_state_reset(RAMState *rs)
     rs->last_sent_block = NULL;
     rs->last_page = 0;
     rs->last_version = ram_list.version;
-    rs->ram_bulk_stage = true;
+    rs->ram_wt_enabled = migrate_track_writes_ram();
+    rs->ram_bulk_stage = !rs->ram_wt_enabled;
     rs->fpo_enabled = false;
 }
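For context, registering RAM for write-protect tracking happens in an earlier
patch of this series, not here; ram_save_iterate() only consumes the resulting
fault events. A minimal sketch of the kernel API that setup relies on
(hypothetical helper name, error paths trimmed to the essentials; requires
Linux >= 5.7):

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Create a userfaultfd and write-protect [addr, addr + len); returns the fd */
static int wp_register_range(void *addr, uint64_t len)
{
    int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
    if (uffd < 0) {
        return -1;
    }

    /* Negotiate the API, asking for write-protect fault reporting */
    struct uffdio_api api = {
        .api = UFFD_API,
        .features = UFFD_FEATURE_PAGEFAULT_FLAG_WP,
    };

    /* Register the range in write-protect mode... */
    struct uffdio_register reg = {
        .range = { .start = (uintptr_t) addr, .len = len },
        .mode = UFFDIO_REGISTER_MODE_WP,
    };

    /* ...then actually arm write protection on it */
    struct uffdio_writeprotect wp = {
        .range = { .start = (uintptr_t) addr, .len = len },
        .mode = UFFDIO_WRITEPROTECT_MODE_WP,
    };

    if (ioctl(uffd, UFFDIO_API, &api) < 0 ||
        ioctl(uffd, UFFDIO_REGISTER, &reg) < 0 ||
        ioctl(uffd, UFFDIO_WRITEPROTECT, &wp) < 0) {
        close(uffd);
        return -1;
    }

    return uffd; /* the migration thread reads fault events from this fd */
}

Once a page saved by ram_save_host_page() has been flushed to the stream, the
same UFFDIO_WRITEPROTECT ioctl with mode 0, as wrapped here by
uffd_protect_memory(..., false), lifts the protection again.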