From patchwork Wed Jun 3 14:49:13 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11585805
From: David Hildenbrand
To: qemu-devel@nongnu.org
Cc: kvm@vger.kernel.org, qemu-s390x@nongnu.org, Richard Henderson,
    Paolo Bonzini, "Dr. David Alan Gilbert", Eduardo Habkost,
    "Michael S. Tsirkin", David Hildenbrand
Subject: [PATCH v3 19/20] virtio-mem: Add trace events
Date: Wed, 3 Jun 2020 16:49:13 +0200
Message-Id: <20200603144914.41645-20-david@redhat.com>
In-Reply-To: <20200603144914.41645-1-david@redhat.com>
References: <20200603144914.41645-1-david@redhat.com>

Let's add some trace events that might come in handy later.

Cc: "Michael S. Tsirkin"
Cc: "Dr. David Alan Gilbert"
David Alan Gilbert" Signed-off-by: David Hildenbrand --- hw/virtio/trace-events | 10 ++++++++++ hw/virtio/virtio-mem.c | 10 +++++++++- 2 files changed, 19 insertions(+), 1 deletion(-) diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events index e83500bee9..c40ad5ea27 100644 --- a/hw/virtio/trace-events +++ b/hw/virtio/trace-events @@ -73,3 +73,13 @@ virtio_iommu_get_domain(uint32_t domain_id) "Alloc domain=%d" virtio_iommu_put_domain(uint32_t domain_id) "Free domain=%d" virtio_iommu_translate_out(uint64_t virt_addr, uint64_t phys_addr, uint32_t sid) "0x%"PRIx64" -> 0x%"PRIx64 " for sid=%d" virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uint64_t addr) "FAULT reason=%d flags=%d endpoint=%d address =0x%"PRIx64 + +# virtio-mem.c +virtio_mem_send_response(uint16_t type) "type=%" PRIu16 +virtio_mem_plug_request(uint64_t addr, uint16_t nb_blocks) "addr=0x%" PRIx64 " nb_blocks=%" PRIu16 +virtio_mem_unplug_request(uint64_t addr, uint16_t nb_blocks) "addr=0x%" PRIx64 " nb_blocks=%" PRIu16 +virtio_mem_unplugged_all(void) "" +virtio_mem_unplug_all_request(void) "" +virtio_mem_resized_usable_region(uint64_t old_size, uint64_t new_size) "old_size=0x%" PRIx64 "new_size=0x%" PRIx64 +virtio_mem_state_request(uint64_t addr, uint16_t nb_blocks) "addr=0x%" PRIx64 " nb_blocks=%" PRIu16 +virtio_mem_state_response(uint16_t state) "state=%" PRIu16 diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c index 158215613c..4d0a2e78c0 100644 --- a/hw/virtio/virtio-mem.c +++ b/hw/virtio/virtio-mem.c @@ -30,6 +30,7 @@ #include "hw/boards.h" #include "hw/qdev-properties.h" #include "config-devices.h" +#include "trace.h" /* * Use QEMU_VMALLOC_ALIGN, so no THP will have to be split when unplugging @@ -100,6 +101,7 @@ static void virtio_mem_send_response(VirtIOMEM *vmem, VirtQueueElement *elem, VirtIODevice *vdev = VIRTIO_DEVICE(vmem); VirtQueue *vq = vmem->vq; + trace_virtio_mem_send_response(le16_to_cpu(resp->type)); iov_from_buf(elem->in_sg, elem->in_num, 0, resp, sizeof(*resp)); virtqueue_push(vq, elem, sizeof(*resp)); @@ -195,6 +197,7 @@ static void virtio_mem_plug_request(VirtIOMEM *vmem, VirtQueueElement *elem, const uint16_t nb_blocks = le16_to_cpu(req->u.plug.nb_blocks); uint16_t type; + trace_virtio_mem_plug_request(gpa, nb_blocks); type = virtio_mem_state_change_request(vmem, gpa, nb_blocks, true); virtio_mem_send_response_simple(vmem, elem, type); } @@ -206,6 +209,7 @@ static void virtio_mem_unplug_request(VirtIOMEM *vmem, VirtQueueElement *elem, const uint16_t nb_blocks = le16_to_cpu(req->u.unplug.nb_blocks); uint16_t type; + trace_virtio_mem_unplug_request(gpa, nb_blocks); type = virtio_mem_state_change_request(vmem, gpa, nb_blocks, false); virtio_mem_send_response_simple(vmem, elem, type); } @@ -225,6 +229,7 @@ static void virtio_mem_resize_usable_region(VirtIOMEM *vmem, return; } + trace_virtio_mem_resized_usable_region(vmem->usable_region_size, newsize); vmem->usable_region_size = newsize; } @@ -247,7 +252,7 @@ static int virtio_mem_unplug_all(VirtIOMEM *vmem) vmem->size = 0; notifier_list_notify(&vmem->size_change_notifiers, &vmem->size); } - + trace_virtio_mem_unplugged_all(); virtio_mem_resize_usable_region(vmem, vmem->requested_size, true); return 0; } @@ -255,6 +260,7 @@ static int virtio_mem_unplug_all(VirtIOMEM *vmem) static void virtio_mem_unplug_all_request(VirtIOMEM *vmem, VirtQueueElement *elem) { + trace_virtio_mem_unplug_all_request(); if (virtio_mem_unplug_all(vmem)) { virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_BUSY); } else { @@ 
@@ -272,6 +278,7 @@ static void virtio_mem_state_request(VirtIOMEM *vmem, VirtQueueElement *elem,
         .type = cpu_to_le16(VIRTIO_MEM_RESP_ACK),
     };
 
+    trace_virtio_mem_state_request(gpa, nb_blocks);
     if (!virtio_mem_valid_range(vmem, gpa, size)) {
         virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_ERROR);
         return;
@@ -284,6 +291,7 @@ static void virtio_mem_state_request(VirtIOMEM *vmem, VirtQueueElement *elem,
     } else {
         resp.u.state.state = cpu_to_le16(VIRTIO_MEM_STATE_MIXED);
     }
+    trace_virtio_mem_state_response(le16_to_cpu(resp.u.state.state));
     virtio_mem_send_response(vmem, elem, &resp);
 }
 
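
For reference, each line added to hw/virtio/trace-events is turned by the
tracetool generator into a trace_*() helper, which is what the virtio-mem.c
hunks above call through the generated "trace.h" that the first hunk includes.
The sketch below is only illustrative, not the literal generated code: the
exact body depends on the trace backends configured at build time, and the
TRACE_VIRTIO_MEM_PLUG_REQUEST constant and _nocheck__ helper name are shown
here as assumptions about the generator's usual naming.

/*
 * Illustrative sketch of the helper tracetool generates for one of the new
 * events (not the literal generated code).
 */
static inline void trace_virtio_mem_plug_request(uint64_t addr, uint16_t nb_blocks)
{
    /* Cheap runtime check: is this event enabled by any trace backend? */
    if (trace_event_get_state_backends(TRACE_VIRTIO_MEM_PLUG_REQUEST)) {
        /*
         * Backend-specific emission; e.g. the "log" backend prints the
         * event name followed by the format string from trace-events,
         * here "addr=0x..." and "nb_blocks=...".
         */
        _nocheck__trace_virtio_mem_plug_request(addr, nb_blocks);
    }
}

At runtime the new events can then be enabled like any other QEMU trace
event, for example with -trace 'virtio_mem_*' on the command line (assuming
a suitable trace backend such as "log" was selected at configure time).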