From patchwork Wed Nov 2 06:17:27 2016
X-Patchwork-Submitter: Liang Li
X-Patchwork-Id: 9408613
From: Liang Li
To: mst@redhat.com, dave.hansen@intel.com
Date: Wed, 2 Nov 2016 14:17:27 +0800
Message-Id: <1478067447-24654-8-git-send-email-liang.z.li@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1478067447-24654-1-git-send-email-liang.z.li@intel.com>
References: <1478067447-24654-1-git-send-email-liang.z.li@intel.com>
Subject: [Qemu-devel] [PATCH kernel v4 7/7] virtio-balloon: tell host vm's unused page info
Cc: virtio-dev@lists.oasis-open.org, cornelia.huck@de.ibm.com,
 kvm@vger.kernel.org, quintela@redhat.com, linux-kernel@vger.kernel.org,
 Liang Li, qemu-devel@nongnu.org, dgilbert@redhat.com, linux-mm@kvack.org,
 amit.shah@redhat.com, pbonzini@redhat.com,
 virtualization@lists.linux-foundation.org, mgorman@techsingularity.net

Support the host's request for the VM's unused page information and
respond with a page bitmap. QEMU can combine this bitmap with its dirty
page logging mechanism to skip sending some of these unused pages, which
reduces network traffic and speeds up live migration.

Signed-off-by: Liang Li
Cc: Michael S. Tsirkin
Cc: Paolo Bonzini
Cc: Cornelia Huck
Cc: Amit Shah
Cc: Dave Hansen
---
 drivers/virtio/virtio_balloon.c | 128 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 121 insertions(+), 7 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index c6c94b6..ba2d37b 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -56,7 +56,7 @@ struct virtio_balloon {
 	struct virtio_device *vdev;
-	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *req_vq;
 
 	/* The balloon servicing is delegated to a freezable workqueue. */
 	struct work_struct update_balloon_stats_work;
@@ -83,6 +83,8 @@ struct virtio_balloon {
 	unsigned int nr_page_bmap;
 	/* Used to record the processed pfn range */
 	unsigned long min_pfn, max_pfn, start_pfn, end_pfn;
+	/* Request header */
+	struct virtio_balloon_req_hdr req_hdr;
 	/*
 	 * The pages we've told the Host we're not using are enqueued
 	 * at vb_dev_info->pages list.
@@ -552,6 +554,63 @@ static void update_balloon_stats(struct virtio_balloon *vb)
 		pages_to_bytes(available));
 }
 
+static void send_unused_pages_info(struct virtio_balloon *vb,
+				unsigned long req_id)
+{
+	struct scatterlist sg_in;
+	unsigned long pfn = 0, bmap_len, pfn_limit, last_pfn, nr_pfn;
+	struct virtqueue *vq = vb->req_vq;
+	struct virtio_balloon_resp_hdr *hdr = vb->resp_hdr;
+	int ret = 1, used_nr_bmap = 0, i;
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP) &&
+		vb->nr_page_bmap == 1)
+		extend_page_bitmap(vb);
+
+	pfn_limit = PFNS_PER_BMAP * vb->nr_page_bmap;
+	mutex_lock(&vb->balloon_lock);
+	last_pfn = get_max_pfn();
+
+	while (ret) {
+		clear_page_bitmap(vb);
+		ret = get_unused_pages(pfn, pfn + pfn_limit, vb->page_bitmap,
+			PFNS_PER_BMAP, vb->nr_page_bmap);
+		if (ret < 0)
+			break;
+		hdr->cmd = BALLOON_GET_UNUSED_PAGES;
+		hdr->id = req_id;
+		bmap_len = BALLOON_BMAP_SIZE * vb->nr_page_bmap;
+
+		if (!ret) {
+			hdr->flag = BALLOON_FLAG_DONE;
+			nr_pfn = last_pfn - pfn;
+			used_nr_bmap = nr_pfn / PFNS_PER_BMAP;
+			if (nr_pfn % PFNS_PER_BMAP)
+				used_nr_bmap++;
+			bmap_len = nr_pfn / BITS_PER_BYTE;
+		} else {
+			hdr->flag = BALLOON_FLAG_CONT;
+			used_nr_bmap = vb->nr_page_bmap;
+		}
+		for (i = 0; i < used_nr_bmap; i++) {
+			unsigned int bmap_size = BALLOON_BMAP_SIZE;
+
+			if (i + 1 == used_nr_bmap)
+				bmap_size = bmap_len - BALLOON_BMAP_SIZE * i;
+			set_bulk_pages(vb, vq, pfn + i * PFNS_PER_BMAP,
+				vb->page_bitmap[i], bmap_size, true);
+		}
+		if (vb->resp_pos > 0)
+			send_resp_data(vb, vq, true);
+		pfn += pfn_limit;
+	}
+
+	mutex_unlock(&vb->balloon_lock);
+	sg_init_one(&sg_in, &vb->req_hdr, sizeof(vb->req_hdr));
+	virtqueue_add_inbuf(vq, &sg_in, 1, &vb->req_hdr, GFP_KERNEL);
+	virtqueue_kick(vq);
+}
+
 /*
  * While most virtqueues communicate guest-initiated requests to the hypervisor,
  * the stats queue operates in reverse.  The driver initializes the virtqueue
@@ -686,18 +745,56 @@ static void update_balloon_size_func(struct work_struct *work)
 		queue_work(system_freezable_wq, work);
 }
 
+static void misc_handle_rq(struct virtio_balloon *vb)
+{
+	struct virtio_balloon_req_hdr *ptr_hdr;
+	unsigned int len;
+
+	ptr_hdr = virtqueue_get_buf(vb->req_vq, &len);
+	if (!ptr_hdr || len != sizeof(vb->req_hdr))
+		return;
+
+	switch (ptr_hdr->cmd) {
+	case BALLOON_GET_UNUSED_PAGES:
+		send_unused_pages_info(vb, ptr_hdr->param);
+		break;
+	default:
+		break;
+	}
+}
+
+static void misc_request(struct virtqueue *vq)
+{
+	struct virtio_balloon *vb = vq->vdev->priv;
+
+	misc_handle_rq(vb);
+}
+
 static int init_vqs(struct virtio_balloon *vb)
 {
-	struct virtqueue *vqs[3];
-	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
-	static const char * const names[] = { "inflate", "deflate", "stats" };
+	struct virtqueue *vqs[4];
+	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack,
+				       stats_request, misc_request };
+	static const char * const names[] = { "inflate", "deflate", "stats",
+					      "misc" };
 	int err, nvqs;
 
 	/*
 	 * We expect two virtqueues: inflate and deflate, and
 	 * optionally stat.
 	 */
-	nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_HOST_REQ_VQ))
+		nvqs = 4;
+	else if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ))
+		nvqs = 3;
+	else
+		nvqs = 2;
+
+	if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
+		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_HOST_REQ_VQ);
+	}
+
 	err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names);
 	if (err)
 		return err;
@@ -718,6 +815,18 @@ static int init_vqs(struct virtio_balloon *vb)
 			BUG();
 		virtqueue_kick(vb->stats_vq);
 	}
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_HOST_REQ_VQ)) {
+		struct scatterlist sg_in;
+
+		vb->req_vq = vqs[3];
+		sg_init_one(&sg_in, &vb->req_hdr, sizeof(vb->req_hdr));
+		if (virtqueue_add_inbuf(vb->req_vq, &sg_in, 1,
+		    &vb->req_hdr, GFP_KERNEL) < 0)
+			__virtio_clear_bit(vb->vdev,
+					VIRTIO_BALLOON_F_HOST_REQ_VQ);
+		else
+			virtqueue_kick(vb->req_vq);
+	}
 	return 0;
 }
 
@@ -851,11 +960,13 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		vb->resp_hdr = kzalloc(sizeof(struct virtio_balloon_resp_hdr), GFP_KERNEL);
 		/* Clear the feature bit if memory allocation fails */
-		if (!vb->resp_hdr)
+		if (!vb->resp_hdr) {
 			__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
-		else {
+			__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_HOST_REQ_VQ);
+		} else {
 			vb->page_bitmap[0] = kmalloc(BALLOON_BMAP_SIZE, GFP_KERNEL);
 			if (!vb->page_bitmap[0]) {
+				__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_HOST_REQ_VQ);
 				__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
 				kfree(vb->resp_hdr);
 			} else {
@@ -864,6 +975,8 @@
 			if (!vb->resp_data) {
 				__virtio_clear_bit(vdev,
 					VIRTIO_BALLOON_F_PAGE_BITMAP);
+				__virtio_clear_bit(vdev,
+					VIRTIO_BALLOON_F_HOST_REQ_VQ);
 				kfree(vb->page_bitmap[0]);
 				kfree(vb->resp_hdr);
 			}
@@ -987,6 +1100,7 @@ static int virtballoon_restore(struct virtio_device *vdev)
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
 	VIRTIO_BALLOON_F_PAGE_BITMAP,
+	VIRTIO_BALLOON_F_HOST_REQ_VQ,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
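
As a side note for reviewers, here is a minimal standalone user-space
sketch of the chunking arithmetic that send_unused_pages_info() performs:
the guest pfn space is walked in chunks of PFNS_PER_BMAP * nr_page_bmap
pfns, full chunks are flagged BALLOON_FLAG_CONT, and the final partial
chunk (BALLOON_FLAG_DONE) only reports the bitmap bytes actually needed.
The values of BALLOON_BMAP_SIZE, the bitmap count and the guest size below
are illustrative assumptions only; the real definitions come from the
earlier patches in this series.

/*
 * Standalone sketch (not part of the patch): reproduces the chunking
 * arithmetic of send_unused_pages_info().  The constants below are
 * illustrative assumptions, not the values defined by this series.
 */
#include <stdio.h>

#define BITS_PER_BYTE		8
#define BALLOON_BMAP_SIZE	(8 * 1024)	/* assumed: one 8 KB bitmap */
#define PFNS_PER_BMAP		(BALLOON_BMAP_SIZE * BITS_PER_BYTE)

int main(void)
{
	unsigned long last_pfn = 600000;	/* ~2.3 GB guest with 4 KB pages */
	unsigned long nr_page_bmap = 4;		/* bitmaps allocated via extend_page_bitmap() */
	unsigned long pfn_limit = PFNS_PER_BMAP * nr_page_bmap;
	unsigned long pfn;

	for (pfn = 0; pfn < last_pfn; pfn += pfn_limit) {
		unsigned long nr_pfn, bmap_len, used_nr_bmap;

		if (pfn + pfn_limit < last_pfn) {
			/* BALLOON_FLAG_CONT: every allocated bitmap is full */
			used_nr_bmap = nr_page_bmap;
			bmap_len = BALLOON_BMAP_SIZE * nr_page_bmap;
		} else {
			/* BALLOON_FLAG_DONE: size only the remaining pfns */
			nr_pfn = last_pfn - pfn;
			used_nr_bmap = nr_pfn / PFNS_PER_BMAP;
			if (nr_pfn % PFNS_PER_BMAP)
				used_nr_bmap++;
			bmap_len = nr_pfn / BITS_PER_BYTE;
		}
		printf("chunk at pfn %lu: %lu bitmap(s), %lu bytes\n",
		       pfn, used_nr_bmap, bmap_len);
	}
	return 0;
}

Built with "gcc -Wall sketch.c" and run, this prints two full CONT chunks
followed by one shorter DONE chunk, which is exactly the sequence of
responses the host sees on the request virtqueue for such a guest.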