From patchwork Tue Mar 26 15:06:48 2019
X-Patchwork-Submitter: Tzvetomir Stoyanov
X-Patchwork-Id: 10871345
From: Tzvetomir Stoyanov
To: rostedt@goodmis.org
Cc: linux-trace-devel@vger.kernel.org
Subject: [PATCH v10 6/9] trace-cmd: Find and store pids of tasks, which run virtual CPUs of given VM
Date: Tue, 26 Mar 2019 17:06:48 +0200
Message-Id: <20190326150651.25811-7-tstoyanov@vmware.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190326150651.25811-1-tstoyanov@vmware.com>
References: <20190326150651.25811-1-tstoyanov@vmware.com>
X-Mailing-List: linux-trace-devel@vger.kernel.org

In order to match host and guest events, a mapping between each guest VCPU and the host task that runs it is needed. Extend the existing struct guest to hold this mapping, and add logic in the read_qemu_guests() function to initialize it. Implement a new internal API, get_guest_vcpu_pid(), to retrieve the VCPU-to-task mapping for a given VM.
Signed-off-by: Tzvetomir Stoyanov 
---
 tracecmd/include/trace-local.h |  1 +
 tracecmd/trace-record.c        | 57 ++++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/tracecmd/include/trace-local.h b/tracecmd/include/trace-local.h
index 8413054..62f5e47 100644
--- a/tracecmd/include/trace-local.h
+++ b/tracecmd/include/trace-local.h
@@ -245,6 +245,7 @@ int tracecmd_local_cpu_count(void);
 
 void tracecmd_set_clock(struct buffer_instance *instance);
 void tracecmd_remove_instance(struct buffer_instance *instance);
+int get_guest_vcpu_pid(unsigned int guest_cid, unsigned int guest_vcpu);
 
 /* No longer in event-utils.h */
 void __noreturn die(const char *fmt, ...); /* Can be overriden */
 void *malloc_or_die(unsigned int size); /* Can be overridden */
diff --git a/tracecmd/trace-record.c b/tracecmd/trace-record.c
index eacf7d2..72a8ed3 100644
--- a/tracecmd/trace-record.c
+++ b/tracecmd/trace-record.c
@@ -2746,10 +2746,12 @@ static bool is_digits(const char *s)
 	return true;
 }
 
+#define VCPUS_MAX	256
 struct guest {
 	char *name;
 	int cid;
 	int pid;
+	int cpu_pid[VCPUS_MAX];
 };
 
 static struct guest *guests;
@@ -2767,6 +2769,46 @@ static char *get_qemu_guest_name(char *arg)
 	return arg;
 }
 
+static void read_qemu_guests_pids(char *guest_task, struct guest *guest)
+{
+	struct dirent *entry_t;
+	char path[PATH_MAX];
+	char *buf = NULL;
+	size_t n = 0;
+	int vcpu;
+	DIR *dir;
+	FILE *ft;
+
+	snprintf(path, sizeof(path), "/proc/%s/task", guest_task);
+	dir = opendir(path);
+	if (!dir)
+		return;
+
+	while ((entry_t = readdir(dir))) {
+		if (!(entry_t->d_type == DT_DIR && is_digits(entry_t->d_name)))
+			continue;
+
+		snprintf(path, sizeof(path), "/proc/%s/task/%s/comm",
+			 guest_task, entry_t->d_name);
+		ft = fopen(path, "r");
+		if (!ft)
+			continue;
+		if (getline(&buf, &n, ft) < 0)
+			goto next;
+		if (strncmp(buf, "CPU ", 4) != 0)
+			goto next;
+
+		vcpu = atoi(buf + 4);
+		if (!(vcpu >= 0 && vcpu < VCPUS_MAX))
+			goto next;
+		guest->cpu_pid[vcpu] = atoi(entry_t->d_name);
+
+next:
+		fclose(ft);
+	}
+	free(buf);
+}
+
 static void read_qemu_guests(void)
 {
 	static bool initialized;
@@ -2828,6 +2870,8 @@ static void read_qemu_guests(void)
 		if (!is_qemu)
 			goto next;
 
+		read_qemu_guests_pids(entry->d_name, &guest);
+
 		guests = realloc(guests, (guests_len + 1) * sizeof(*guests));
 		if (!guests)
 			die("Can not allocate guest buffer");
@@ -2873,6 +2917,19 @@ static char *parse_guest_name(char *guest, int *cid, int *port)
 	return guest;
 }
 
+int get_guest_vcpu_pid(unsigned int guest_cid, unsigned int guest_vcpu)
+{
+	int i;
+
+	if (!guests || guest_vcpu >= VCPUS_MAX)
+		return -1;
+
+	for (i = 0; i < guests_len; i++)
+		if (guest_cid == guests[i].cid)
+			return guests[i].cpu_pid[guest_vcpu];
+	return -1;
+}
+
 static void set_prio(int prio)
 {
 	struct sched_param sp;