From patchwork Wed May 20 12:31:52 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11560179
From: David Hildenbrand
To: qemu-devel@nongnu.org
Cc: kvm@vger.kernel.org, qemu-s390x@nongnu.org, Richard Henderson,
    Paolo Bonzini, "Dr. David Alan Gilbert", Eduardo Habkost,
    "Michael S. Tsirkin", David Hildenbrand
Subject: [PATCH v2 19/19] virtio-mem: Add trace events
Date: Wed, 20 May 2020 14:31:52 +0200
Message-Id: <20200520123152.60527-20-david@redhat.com>
In-Reply-To: <20200520123152.60527-1-david@redhat.com>
References: <20200520123152.60527-1-david@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Let's add some trace events that might come in handy later.

Cc: "Michael S. Tsirkin"
Cc: "Dr. David Alan Gilbert"
Signed-off-by: David Hildenbrand
David Alan Gilbert" Signed-off-by: David Hildenbrand --- hw/virtio/trace-events | 10 ++++++++++ hw/virtio/virtio-mem.c | 10 +++++++++- 2 files changed, 19 insertions(+), 1 deletion(-) diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events index e83500bee9..c40ad5ea27 100644 --- a/hw/virtio/trace-events +++ b/hw/virtio/trace-events @@ -73,3 +73,13 @@ virtio_iommu_get_domain(uint32_t domain_id) "Alloc domain=%d" virtio_iommu_put_domain(uint32_t domain_id) "Free domain=%d" virtio_iommu_translate_out(uint64_t virt_addr, uint64_t phys_addr, uint32_t sid) "0x%"PRIx64" -> 0x%"PRIx64 " for sid=%d" virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uint64_t addr) "FAULT reason=%d flags=%d endpoint=%d address =0x%"PRIx64 + +# virtio-mem.c +virtio_mem_send_response(uint16_t type) "type=%" PRIu16 +virtio_mem_plug_request(uint64_t addr, uint16_t nb_blocks) "addr=0x%" PRIx64 " nb_blocks=%" PRIu16 +virtio_mem_unplug_request(uint64_t addr, uint16_t nb_blocks) "addr=0x%" PRIx64 " nb_blocks=%" PRIu16 +virtio_mem_unplugged_all(void) "" +virtio_mem_unplug_all_request(void) "" +virtio_mem_resized_usable_region(uint64_t old_size, uint64_t new_size) "old_size=0x%" PRIx64 "new_size=0x%" PRIx64 +virtio_mem_state_request(uint64_t addr, uint16_t nb_blocks) "addr=0x%" PRIx64 " nb_blocks=%" PRIu16 +virtio_mem_state_response(uint16_t state) "state=%" PRIu16 diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c index d863f336e8..87502b9989 100644 --- a/hw/virtio/virtio-mem.c +++ b/hw/virtio/virtio-mem.c @@ -30,6 +30,7 @@ #include "hw/boards.h" #include "hw/qdev-properties.h" #include "config-devices.h" +#include "trace.h" /* * Use QEMU_VMALLOC_ALIGN, so no THP will have to be split when unplugging @@ -94,6 +95,7 @@ static void virtio_mem_send_response(VirtIOMEM *vmem, VirtQueueElement *elem, VirtIODevice *vdev = VIRTIO_DEVICE(vmem); VirtQueue *vq = vmem->vq; + trace_virtio_mem_send_response(le16_to_cpu(resp->type)); iov_from_buf(elem->in_sg, elem->in_num, 0, resp, sizeof(*resp)); virtqueue_push(vq, elem, sizeof(*resp)); @@ -188,6 +190,7 @@ static void virtio_mem_plug_request(VirtIOMEM *vmem, VirtQueueElement *elem, const uint16_t nb_blocks = le16_to_cpu(req->u.plug.nb_blocks); uint16_t type; + trace_virtio_mem_plug_request(gpa, nb_blocks); type = virtio_mem_state_change_request(vmem, gpa, nb_blocks, true); virtio_mem_send_response_simple(vmem, elem, type); } @@ -199,6 +202,7 @@ static void virtio_mem_unplug_request(VirtIOMEM *vmem, VirtQueueElement *elem, const uint16_t nb_blocks = le16_to_cpu(req->u.unplug.nb_blocks); uint16_t type; + trace_virtio_mem_unplug_request(gpa, nb_blocks); type = virtio_mem_state_change_request(vmem, gpa, nb_blocks, false); virtio_mem_send_response_simple(vmem, elem, type); } @@ -215,6 +219,7 @@ static void virtio_mem_resize_usable_region(VirtIOMEM *vmem, return; } + trace_virtio_mem_resized_usable_region(vmem->usable_region_size, newsize); vmem->usable_region_size = newsize; } @@ -237,7 +242,7 @@ static int virtio_mem_unplug_all(VirtIOMEM *vmem) vmem->size = 0; notifier_list_notify(&vmem->size_change_notifiers, &vmem->size); } - + trace_virtio_mem_unplugged_all(); virtio_mem_resize_usable_region(vmem, vmem->requested_size, true); return 0; } @@ -245,6 +250,7 @@ static int virtio_mem_unplug_all(VirtIOMEM *vmem) static void virtio_mem_unplug_all_request(VirtIOMEM *vmem, VirtQueueElement *elem) { + trace_virtio_mem_unplug_all_request(); if (virtio_mem_unplug_all(vmem)) { virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_BUSY); } else { @@ -262,6 
+268,7 @@ static void virtio_mem_state_request(VirtIOMEM *vmem, VirtQueueElement *elem, .type = cpu_to_le16(VIRTIO_MEM_RESP_ACK), }; + trace_virtio_mem_state_request(gpa, nb_blocks); if (!virtio_mem_valid_range(vmem, gpa, size)) { virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_ERROR); return; @@ -274,6 +281,7 @@ static void virtio_mem_state_request(VirtIOMEM *vmem, VirtQueueElement *elem, } else { resp.u.state.state = cpu_to_le16(VIRTIO_MEM_STATE_MIXED); } + trace_virtio_mem_state_response(le16_to_cpu(resp.u.state.state)); virtio_mem_send_response(vmem, elem, &resp); }
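
Each line added to hw/virtio/trace-events above is turned by QEMU's tracetool
into a trace_<event>() helper that the instrumented code calls, so nothing is
formatted while an event is disabled. Below is a minimal, self-contained C
sketch of roughly what such a helper does for virtio_mem_plug_request,
assuming a log-style backend; every identifier in it other than the
trace-events format string is a simplified stand-in, not the actual generated
code, which uses QEMU-internal names and per-backend dispatch that vary by
QEMU version and configuration.

    /*
     * Rough stand-in for the helper tracetool would generate from:
     *   virtio_mem_plug_request(uint64_t addr, uint16_t nb_blocks)
     *       "addr=0x%" PRIx64 " nb_blocks=%" PRIu16
     * All names here are illustrative only.
     */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Per-event enable flag; QEMU's trace core flips this at runtime. */
    static bool virtio_mem_plug_request_event_enabled;

    static inline void trace_virtio_mem_plug_request(uint64_t addr,
                                                     uint16_t nb_blocks)
    {
        /* Cheap check first: a disabled event costs only a branch. */
        if (virtio_mem_plug_request_event_enabled) {
            /* Log-backend style output using the format string above. */
            fprintf(stderr, "virtio_mem_plug_request addr=0x%" PRIx64
                    " nb_blocks=%" PRIu16 "\n", addr, nb_blocks);
        }
    }

At runtime the new events can then be switched on with QEMU's existing tracing
controls, for example -trace 'virtio_mem_*' on the command line or
"trace-event virtio_mem_plug_request on" in the HMP monitor, provided a
suitable trace backend (such as "log") was built in.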