From patchwork Thu Feb 27 00:28:53 2025
X-Patchwork-Submitter: Chun-Tse Shao
X-Patchwork-Id: 13993360
Message-ID: <20250227003359.732948-2-ctshao@google.com>
In-Reply-To: <20250227003359.732948-1-ctshao@google.com>
References: <20250227003359.732948-1-ctshao@google.com>
Date: Wed, 26 Feb 2025 16:28:53 -0800
Subject: [PATCH v8 1/4] perf lock: Add bpf maps for owner stack tracing
From: Chun-Tse Shao
To: linux-kernel@vger.kernel.org
Cc: Chun-Tse Shao, peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
    namhyung@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
    jolsa@kernel.org, irogers@google.com, adrian.hunter@intel.com,
    kan.liang@linux.intel.com, nick.forrington@arm.com,
    linux-perf-users@vger.kernel.org, bpf@vger.kernel.org
X-Mailing-List: bpf@vger.kernel.org

Add a struct and a few BPF maps in order to trace the owner's stack.

`struct owner_tracing_data`: Contains the owner's pid, stack id, a
timestamp for when the owner acquired the lock, and the count of lock
waiters.

`stack_buf`: Per-CPU buffer for retrieving the owner's stacktrace.

`owner_stacks`: Maps an owner stacktrace to a customized owner stack id.

`owner_data`: Maps a lock address to `struct owner_tracing_data` in the
BPF program.

`owner_stat`: Used for reporting owner stacktraces in usermode.
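For orientation, the intended relationship between the new maps can be
summarized as follows (an illustrative comment only, not part of the patch):

/*
 * stack_buf    : per-CPU scratch buffer filled by bpf_get_task_stack()
 *                with the owner's stacktrace; resized from user space to
 *                max_stack * sizeof(u64).
 * owner_stacks : owner stacktrace (u64[max_stack]) -> customized owner
 *                stack id (s32), so owner stacks can be compared by id.
 * owner_data   : lock address (u64) -> struct owner_tracing_data, the
 *                per-lock owner state while contention is ongoing.
 * owner_stat   : contention_key carrying an owner stack id ->
 *                struct contention_data, read and reported in usermode.
 */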
Signed-off-by: Chun-Tse Shao
---
 tools/perf/util/bpf_lock_contention.c        | 14 ++++++--
 .../perf/util/bpf_skel/lock_contention.bpf.c | 33 +++++++++++++++++++
 tools/perf/util/bpf_skel/lock_data.h         |  7 ++++
 3 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index fc8666222399..76542b86e83f 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -131,10 +131,20 @@ int lock_contention_prepare(struct lock_contention *con)
 	else
 		bpf_map__set_max_entries(skel->maps.task_data, 1);
 
-	if (con->save_callstack)
+	if (con->save_callstack) {
 		bpf_map__set_max_entries(skel->maps.stacks, con->map_nr_entries);
-	else
+		if (con->owner) {
+			bpf_map__set_value_size(skel->maps.stack_buf, con->max_stack * sizeof(u64));
+			bpf_map__set_key_size(skel->maps.owner_stacks,
+					      con->max_stack * sizeof(u64));
+			bpf_map__set_max_entries(skel->maps.owner_stacks, con->map_nr_entries);
+			bpf_map__set_max_entries(skel->maps.owner_data, con->map_nr_entries);
+			bpf_map__set_max_entries(skel->maps.owner_stat, con->map_nr_entries);
+			skel->rodata->max_stack = con->max_stack;
+		}
+	} else {
 		bpf_map__set_max_entries(skel->maps.stacks, 1);
+	}
 
 	if (target__has_cpu(target)) {
 		skel->rodata->has_cpu = 1;
diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
index 6533ea9b044c..23fe9cc980ae 100644
--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
+++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
@@ -27,6 +27,38 @@ struct {
 	__uint(max_entries, MAX_ENTRIES);
 } stacks SEC(".maps");
 
+/* buffer for owner stacktrace */
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u64));
+	__uint(max_entries, 1);
+} stack_buf SEC(".maps");
+
+/* a map for tracing owner stacktrace to owner stack id */
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(key_size, sizeof(__u64)); // owner stacktrace
+	__uint(value_size, sizeof(__s32)); // owner stack id
+	__uint(max_entries, 1);
+} owner_stacks SEC(".maps");
+
+/* a map for tracing lock address to owner data */
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(key_size, sizeof(__u64)); // lock address
+	__uint(value_size, sizeof(struct owner_tracing_data));
+	__uint(max_entries, 1);
+} owner_data SEC(".maps");
+
+/* a map for contention_key (stores owner stack id) to contention data */
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(key_size, sizeof(struct contention_key));
+	__uint(value_size, sizeof(struct contention_data));
+	__uint(max_entries, 1);
+} owner_stat SEC(".maps");
+
 /* maintain timestamp at the beginning of contention */
 struct {
 	__uint(type, BPF_MAP_TYPE_HASH);
@@ -143,6 +175,7 @@ const volatile int needs_callstack;
 const volatile int stack_skip;
 const volatile int lock_owner;
 const volatile int use_cgroup_v2;
+const volatile int max_stack;
 
 /* determine the key of lock stat */
 const volatile int aggr_mode;
diff --git a/tools/perf/util/bpf_skel/lock_data.h b/tools/perf/util/bpf_skel/lock_data.h
index c15f734d7fc4..15f5743bd409 100644
--- a/tools/perf/util/bpf_skel/lock_data.h
+++ b/tools/perf/util/bpf_skel/lock_data.h
@@ -3,6 +3,13 @@
 #ifndef UTIL_BPF_SKEL_LOCK_DATA_H
 #define UTIL_BPF_SKEL_LOCK_DATA_H
 
+struct owner_tracing_data {
+	u32 pid; // Who has the lock.
+	u32 count; // How many waiters for this lock.
+	u64 timestamp; // The time while the owner acquires lock and contention is going on.
+	s32 stack_id; // Identifier for `owner_stat`, which stores as value in `owner_stacks`
+};
+
 struct tstamp_data {
 	u64 timestamp;
 	u64 lock;

From patchwork Thu Feb 27 00:28:54 2025
X-Patchwork-Submitter: Chun-Tse Shao
X-Patchwork-Id: 13993364
Message-ID: <20250227003359.732948-3-ctshao@google.com>
In-Reply-To: <20250227003359.732948-1-ctshao@google.com>
References: <20250227003359.732948-1-ctshao@google.com>
Date: Wed, 26 Feb 2025 16:28:54 -0800
Subject: [PATCH v8 2/4] perf lock: Retrieve owner callstack in bpf program
From: Chun-Tse Shao
To: linux-kernel@vger.kernel.org
Cc: Chun-Tse Shao, peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
    namhyung@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
    jolsa@kernel.org, irogers@google.com, adrian.hunter@intel.com,
    kan.liang@linux.intel.com, nick.forrington@arm.com,
    linux-perf-users@vger.kernel.org, bpf@vger.kernel.org
X-Mailing-List: bpf@vger.kernel.org

This implements per-callstack aggregation of lock owners in addition to
the per-thread one. The owner callstack is captured with
`bpf_get_task_stack()` at `contention_begin()`, and a custom stackid
function is added so that owner stacks can be compared easily.

The owner info is kept in a hash map using the lock address as the key,
to handle multiple waiters for the same lock. At `contention_end()`, it
updates the owner lock stat based on the info that was saved at
`contention_begin()`. If there are more waiters, it updates the owner
pid to itself, since `contention_end()` means the current task gets the
lock now. But it also needs to check the return value of the lock
function in case the task was killed by a signal or something.
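In outline, the owner-tracking flow added by this patch works as follows
(a simplified sketch in comment form, not the literal patch code; error
paths and verifier-related details are omitted):

/*
 * contention_begin(lock, flags):
 *	owner = get_lock_owner(lock, flags);
 *	fill the per-CPU stack_buf via bpf_get_task_stack(owner);
 *	id = get_owner_stack_id(stack_buf);
 *	if (no owner_data[lock] || owner_data[lock].pid != owner's pid)
 *		owner_data[lock] = { owner's pid, now, count = 1, id };
 *	else {
 *		owner_data[lock].count++;
 *		if (id != owner_data[lock].stack_id)	// same owner, new stack
 *			flush the elapsed time into owner_stat and record the new id;
 *	}
 *
 * contention_end(lock, ret):
 *	flush the elapsed time for owner_data[lock].stack_id into owner_stat;
 *	if (owner_data[lock].count <= 1)
 *		delete owner_data[lock];		// no waiters left
 *	else if (ret < 0)				// waiter bailed out; owner unchanged
 *		re-sample the owner's stack and update owner_data[lock].stack_id;
 *	else						// current task is the new owner
 *		update owner_data[lock] with the current pid and stack_id = -1;
 */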
Signed-off-by: Chun-Tse Shao
---
 .../perf/util/bpf_skel/lock_contention.bpf.c | 212 +++++++++++++++++-
 1 file changed, 203 insertions(+), 9 deletions(-)

diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
index 23fe9cc980ae..69be7a4234e0 100644
--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
+++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
@@ -197,6 +197,9 @@ int data_fail;
 int task_map_full;
 int data_map_full;
 
+struct task_struct *bpf_task_from_pid(s32 pid) __ksym __weak;
+void bpf_task_release(struct task_struct *p) __ksym __weak;
+
 static inline __u64 get_current_cgroup_id(void)
 {
 	struct task_struct *task;
@@ -420,6 +423,61 @@ static inline struct tstamp_data *get_tstamp_elem(__u32 flags)
 	return pelem;
 }
 
+static inline s32 get_owner_stack_id(u64 *stacktrace)
+{
+	s32 *id, new_id;
+	static s64 id_gen = 1;
+
+	id = bpf_map_lookup_elem(&owner_stacks, stacktrace);
+	if (id)
+		return *id;
+
+	new_id = (s32)__sync_fetch_and_add(&id_gen, 1);
+
+	bpf_map_update_elem(&owner_stacks, stacktrace, &new_id, BPF_NOEXIST);
+
+	id = bpf_map_lookup_elem(&owner_stacks, stacktrace);
+	if (id)
+		return *id;
+
+	return -1;
+}
+
+static inline void update_contention_data(struct contention_data *data, u64 duration, u32 count)
+{
+	__sync_fetch_and_add(&data->total_time, duration);
+	__sync_fetch_and_add(&data->count, count);
+
+	/* FIXME: need atomic operations */
+	if (data->max_time < duration)
+		data->max_time = duration;
+	if (data->min_time > duration)
+		data->min_time = duration;
+}
+
+static inline void update_owner_stat(u32 id, u64 duration, u32 flags)
+{
+	struct contention_key key = {
+		.stack_id = id,
+		.pid = 0,
+		.lock_addr_or_cgroup = 0,
+	};
+	struct contention_data *data = bpf_map_lookup_elem(&owner_stat, &key);
+
+	if (!data) {
+		struct contention_data first = {
+			.total_time = duration,
+			.max_time = duration,
+			.min_time = duration,
+			.count = 1,
+			.flags = flags,
+		};
+		bpf_map_update_elem(&owner_stat, &key, &first, BPF_NOEXIST);
+	} else {
+		update_contention_data(data, duration, 1);
+	}
+}
+
 SEC("tp_btf/contention_begin")
 int contention_begin(u64 *ctx)
 {
@@ -437,6 +495,72 @@ int contention_begin(u64 *ctx)
 	pelem->flags = (__u32)ctx[1];
 
 	if (needs_callstack) {
+		u32 i = 0;
+		u32 id = 0;
+		int owner_pid;
+		u64 *buf;
+		struct task_struct *task;
+		struct owner_tracing_data *otdata;
+
+		if (!lock_owner)
+			goto skip_owner;
+
+		task = get_lock_owner(pelem->lock, pelem->flags);
+		if (!task)
+			goto skip_owner;
+
+		owner_pid = BPF_CORE_READ(task, pid);
+
+		buf = bpf_map_lookup_elem(&stack_buf, &i);
+		if (!buf)
+			goto skip_owner;
+		for (i = 0; i < max_stack; i++)
+			buf[i] = 0x0;
+
+		if (!bpf_task_from_pid)
+			goto skip_owner;
+
+		task = bpf_task_from_pid(owner_pid);
+		if (!task)
+			goto skip_owner;
+
+		bpf_get_task_stack(task, buf, max_stack * sizeof(unsigned long), 0);
+		bpf_task_release(task);
+
+		otdata = bpf_map_lookup_elem(&owner_data, &pelem->lock);
+		id = get_owner_stack_id(buf);
+
+		/*
+		 * Contention just happens, or corner case `lock` is owned by process not
+		 * `owner_pid`. For the corner case we treat it as unexpected internal error and
+		 * just ignore the previous tracing record.
+		 */
+		if (!otdata || otdata->pid != owner_pid) {
+			struct owner_tracing_data first = {
+				.pid = owner_pid,
+				.timestamp = pelem->timestamp,
+				.count = 1,
+				.stack_id = id,
+			};
+			bpf_map_update_elem(&owner_data, &pelem->lock, &first, BPF_ANY);
+		}
+		/* Contention is ongoing and new waiter joins */
+		else {
+			__sync_fetch_and_add(&otdata->count, 1);
+
+			/*
+			 * The owner is the same, but stacktrace might be changed. In this case we
+			 * store/update `owner_stat` based on current owner stack id.
+			 */
+			if (id != otdata->stack_id) {
+				update_owner_stat(id, pelem->timestamp - otdata->timestamp,
+						  pelem->flags);
+
+				otdata->timestamp = pelem->timestamp;
+				otdata->stack_id = id;
+			}
+		}
+skip_owner:
 		pelem->stack_id = bpf_get_stackid(ctx, &stacks,
 						  BPF_F_FAST_STACK_CMP | stack_skip);
 		if (pelem->stack_id < 0)
@@ -473,6 +597,7 @@ int contention_end(u64 *ctx)
 	struct tstamp_data *pelem;
 	struct contention_key key = {};
 	struct contention_data *data;
+	__u64 timestamp;
 	__u64 duration;
 	bool need_delete = false;
 
@@ -500,12 +625,88 @@ int contention_end(u64 *ctx)
 		need_delete = true;
 	}
 
-	duration = bpf_ktime_get_ns() - pelem->timestamp;
+	timestamp = bpf_ktime_get_ns();
+	duration = timestamp - pelem->timestamp;
 	if ((__s64)duration < 0) {
 		__sync_fetch_and_add(&time_fail, 1);
 		goto out;
 	}
 
+	if (needs_callstack && lock_owner) {
+		struct owner_tracing_data *otdata = bpf_map_lookup_elem(&owner_data, &pelem->lock);
+
+		if (!otdata)
+			goto skip_owner;
+
+		/* Update `owner_stat` */
+		update_owner_stat(otdata->stack_id, timestamp - otdata->timestamp, pelem->flags);
+
+		/* No contention is occurring, delete `lock` entry in `owner_data` */
+		if (otdata->count <= 1)
+			bpf_map_delete_elem(&owner_data, &pelem->lock);
+		/*
+		 * Contention is still ongoing, with a new owner (current task). `owner_data`
+		 * should be updated accordingly.
+		 */
+		else {
+			u32 i = 0;
+			s32 ret = (s32)ctx[1];
+			u64 *buf;
+
+			otdata->timestamp = timestamp;
+			__sync_fetch_and_add(&otdata->count, -1);
+
+			buf = bpf_map_lookup_elem(&stack_buf, &i);
+			if (!buf)
+				goto skip_owner;
+			for (i = 0; i < (u32)max_stack; i++)
+				buf[i] = 0x0;
+
+			/*
+			 * `ret` has the return code of the lock function.
+			 * If `ret` is negative, the current task terminates lock waiting without
+			 * acquiring it. Owner is not changed, but we still need to update the owner
+			 * stack.
+			 */
+			if (ret < 0) {
+				s32 id = 0;
+				struct task_struct *task;
+
+				if (!bpf_task_from_pid)
+					goto skip_owner;
+
+				task = bpf_task_from_pid(otdata->pid);
+				if (!task)
+					goto skip_owner;
+
+				bpf_get_task_stack(task, buf,
+						   max_stack * sizeof(unsigned long), 0);
+				bpf_task_release(task);
+
+				id = get_owner_stack_id(buf);
+
+				/*
+				 * If owner stack is changed, update owner stack id for this lock.
+				 */
+				if (id != otdata->stack_id)
+					otdata->stack_id = id;
+			}
+			/*
+			 * Otherwise, update tracing data with the current task, which is the new
+			 * owner.
+			 */
+			else {
+				otdata->pid = pid;
+				/*
+				 * We don't want to retrieve callstack here, since it is where the
+				 * current task acquires the lock and provides no additional
+				 * information. We simply assign -1 to invalidate it.
+				 */
+				otdata->stack_id = -1;
+			}
+		}
+	}
+skip_owner:
 	switch (aggr_mode) {
 	case LOCK_AGGR_CALLER:
 		key.stack_id = pelem->stack_id;
@@ -589,14 +790,7 @@ int contention_end(u64 *ctx)
 	}
 
 found:
-	__sync_fetch_and_add(&data->total_time, duration);
-	__sync_fetch_and_add(&data->count, 1);
-
-	/* FIXME: need atomic operations */
-	if (data->max_time < duration)
-		data->max_time = duration;
-	if (data->min_time > duration)
-		data->min_time = duration;
+	update_contention_data(data, duration, 1);
 
 out:
 	pelem->lock = 0;
From patchwork Thu Feb 27 00:28:55 2025
X-Patchwork-Submitter: Chun-Tse Shao
X-Patchwork-Id: 13993365
Message-ID: <20250227003359.732948-4-ctshao@google.com>
In-Reply-To: <20250227003359.732948-1-ctshao@google.com>
References: <20250227003359.732948-1-ctshao@google.com>
Date: Wed, 26 Feb 2025 16:28:55 -0800
Subject: [PATCH v8 3/4] perf lock: Make rb_tree helper functions generic
From: Chun-Tse Shao
To: linux-kernel@vger.kernel.org
Cc: Chun-Tse Shao, peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
    namhyung@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
    jolsa@kernel.org, irogers@google.com, adrian.hunter@intel.com,
    kan.liang@linux.intel.com, nick.forrington@arm.com,
    linux-perf-users@vger.kernel.org, bpf@vger.kernel.org
X-Mailing-List: bpf@vger.kernel.org

The rb_tree helper functions can be reused for parsing `owner_lock_stat`
into an rb tree for sorting.
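As a usage sketch, the now-generic helpers can drive a private tree like
this (it mirrors how the next patch in the series consumes them;
`pop_owner_stack_trace()`, `compare` and `print_lock_stat()` belong to
that later context):

	struct rb_root root = RB_ROOT;
	struct lock_stat *st;

	/* sort entries into a private tree, then print them in order */
	while ((st = pop_owner_stack_trace(con)) != NULL)
		insert_to(&root, st, compare);

	while ((st = pop_from(&root)) != NULL) {
		print_lock_stat(con, st);
		free(st);
	}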
Signed-off-by: Chun-Tse Shao
---
 tools/perf/builtin-lock.c | 34 +++++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 11 deletions(-)

diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index 5d405cd8e696..9bebc186286f 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -418,16 +418,13 @@ static void combine_lock_stats(struct lock_stat *st)
 	rb_insert_color(&st->rb, &sorted);
 }
 
-static void insert_to_result(struct lock_stat *st,
-			     int (*bigger)(struct lock_stat *, struct lock_stat *))
+static void insert_to(struct rb_root *rr, struct lock_stat *st,
+		      int (*bigger)(struct lock_stat *, struct lock_stat *))
 {
-	struct rb_node **rb = &result.rb_node;
+	struct rb_node **rb = &rr->rb_node;
 	struct rb_node *parent = NULL;
 	struct lock_stat *p;
 
-	if (combine_locks && st->combined)
-		return;
-
 	while (*rb) {
 		p = container_of(*rb, struct lock_stat, rb);
 		parent = *rb;
@@ -439,13 +436,21 @@ static void insert_to_result(struct lock_stat *st,
 	}
 
 	rb_link_node(&st->rb, parent, rb);
-	rb_insert_color(&st->rb, &result);
+	rb_insert_color(&st->rb, rr);
 }
 
-/* returns left most element of result, and erase it */
-static struct lock_stat *pop_from_result(void)
+static inline void insert_to_result(struct lock_stat *st,
+				    int (*bigger)(struct lock_stat *,
+						  struct lock_stat *))
+{
+	if (combine_locks && st->combined)
+		return;
+	insert_to(&result, st, bigger);
+}
+
+static inline struct lock_stat *pop_from(struct rb_root *rr)
 {
-	struct rb_node *node = result.rb_node;
+	struct rb_node *node = rr->rb_node;
 
 	if (!node)
 		return NULL;
@@ -453,8 +458,15 @@ static struct lock_stat *pop_from_result(void)
 	while (node->rb_left)
 		node = node->rb_left;
 
-	rb_erase(node, &result);
+	rb_erase(node, rr);
 	return container_of(node, struct lock_stat, rb);
+
+}
+
+/* returns left most element of result, and erase it */
+static struct lock_stat *pop_from_result(void)
+{
+	return pop_from(&result);
 }
 
 struct trace_lock_handler {

From patchwork Thu Feb 27 00:28:56 2025
X-Patchwork-Submitter: Chun-Tse Shao
X-Patchwork-Id: 13993366
Message-ID: <20250227003359.732948-5-ctshao@google.com>
In-Reply-To: <20250227003359.732948-1-ctshao@google.com>
References: <20250227003359.732948-1-ctshao@google.com>
Date: Wed, 26 Feb 2025 16:28:56 -0800
Subject: [PATCH v8 4/4] perf lock: Report owner stack in usermode
From: Chun-Tse Shao
To: linux-kernel@vger.kernel.org
Cc: Chun-Tse Shao, peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
    namhyung@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
    jolsa@kernel.org, irogers@google.com, adrian.hunter@intel.com,
    kan.liang@linux.intel.com, nick.forrington@arm.com,
    linux-perf-users@vger.kernel.org, bpf@vger.kernel.org
X-Mailing-List: bpf@vger.kernel.org

This patch parses `owner_lock_stat` into an RB tree, enabling ordered
reporting of owner lock statistics with stack traces. It also updates
the documentation for the `-o` option in contention mode, decouples
`-o` from `-t`, and issues a warning to inform users about the new
behavior of `-ov`.

Example output:

 $ sudo ~/linux/tools/perf/perf lock con -abvo -Y mutex-spin -E3 perf bench sched pipe
 ...
  contended   total wait     max wait     avg wait         type   caller

        171      1.55 ms     20.26 us      9.06 us        mutex   pipe_read+0x57
                        0xffffffffac6318e7  pipe_read+0x57
                        0xffffffffac623862  vfs_read+0x332
                        0xffffffffac62434b  ksys_read+0xbb
                        0xfffffffface604b2  do_syscall_64+0x82
                        0xffffffffad00012f  entry_SYSCALL_64_after_hwframe+0x76
         36    193.71 us     15.27 us      5.38 us        mutex   pipe_write+0x50
                        0xffffffffac631ee0  pipe_write+0x50
                        0xffffffffac6241db  vfs_write+0x3bb
                        0xffffffffac6244ab  ksys_write+0xbb
                        0xfffffffface604b2  do_syscall_64+0x82
                        0xffffffffad00012f  entry_SYSCALL_64_after_hwframe+0x76
          4     51.22 us     16.47 us     12.80 us        mutex   do_epoll_wait+0x24d
                        0xffffffffac691f0d  do_epoll_wait+0x24d
                        0xffffffffac69249b  do_epoll_pwait.part.0+0xb
                        0xffffffffac693ba5  __x64_sys_epoll_pwait+0x95
                        0xfffffffface604b2  do_syscall_64+0x82
                        0xffffffffad00012f  entry_SYSCALL_64_after_hwframe+0x76

 === owner stack trace ===

          3     31.24 us     15.27 us     10.41 us        mutex   pipe_read+0x348
                        0xffffffffac631bd8  pipe_read+0x348
                        0xffffffffac623862  vfs_read+0x332
                        0xffffffffac62434b  ksys_read+0xbb
                        0xfffffffface604b2  do_syscall_64+0x82
                        0xffffffffad00012f  entry_SYSCALL_64_after_hwframe+0x76
 ...
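In outline, the new `pop_owner_stack_trace()` added below drains one owner
stack per call from the BPF maps, roughly like this (a trimmed sketch of the
function in the diff; allocation failures, error handling and name
resolution are omitted, and `stacks_fd`/`stat_fd` are the map fds obtained
in the full function):

	u64 *stack_trace = zalloc(con->max_stack * sizeof(*stack_trace));
	struct contention_key ckey = {};
	struct contention_data cdata = {};
	s32 stack_id;

	/* pick any remaining owner stack and look up its aggregated stats */
	if (bpf_map_get_next_key(stacks_fd, NULL, stack_trace))
		return NULL;				/* maps are drained */
	bpf_map_lookup_elem(stacks_fd, stack_trace, &stack_id);
	ckey.stack_id = stack_id;
	bpf_map_lookup_elem(stat_fd, &ckey, &cdata);

	/* ...convert cdata into a struct lock_stat, then consume the entries */
	bpf_map_delete_elem(stacks_fd, stack_trace);
	bpf_map_delete_elem(stat_fd, &ckey);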
Signed-off-by: Chun-Tse Shao
---
 tools/perf/Documentation/perf-lock.txt |  5 +-
 tools/perf/builtin-lock.c              | 22 +++++++-
 tools/perf/util/bpf_lock_contention.c  | 71 +++++++++++++++++++++++---
 tools/perf/util/lock-contention.h      |  7 +++
 4 files changed, 94 insertions(+), 11 deletions(-)

diff --git a/tools/perf/Documentation/perf-lock.txt b/tools/perf/Documentation/perf-lock.txt
index d3793054f7d3..859dc11a7372 100644
--- a/tools/perf/Documentation/perf-lock.txt
+++ b/tools/perf/Documentation/perf-lock.txt
@@ -179,8 +179,9 @@ CONTENTION OPTIONS
 
 -o::
 --lock-owner::
-	Show lock contention stat by owners. Implies --threads and
-	requires --use-bpf.
+	Show lock contention stat by owners. This option can be combined with -t,
+	which shows owner's per thread lock stats, or -v, which shows owner's
+	stacktrace. Requires --use-bpf.
 
 -Y::
 --type-filter=::
diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index 9bebc186286f..05e7bc30488a 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -1817,6 +1817,22 @@ static void print_contention_result(struct lock_contention *con)
 		break;
 	}
 
+	if (con->owner && con->save_callstack && verbose > 0) {
+		struct rb_root root = RB_ROOT;
+
+		if (symbol_conf.field_sep)
+			fprintf(lock_output, "# owner stack trace:\n");
+		else
+			fprintf(lock_output, "\n=== owner stack trace ===\n\n");
+		while ((st = pop_owner_stack_trace(con)))
+			insert_to(&root, st, compare);
+
+		while ((st = pop_from(&root))) {
+			print_lock_stat(con, st);
+			free(st);
+		}
+	}
+
 	if (print_nr_entries) {
 		/* update the total/bad stats */
 		while ((st = pop_from_result())) {
@@ -1962,8 +1978,10 @@ static int check_lock_contention_options(const struct option *options,
 		}
 	}
 
-	if (show_lock_owner)
-		show_thread_stats = true;
+	if (show_lock_owner && !show_thread_stats) {
+		pr_warning("Now -o try to show owner's callstack instead of pid and comm.\n");
+		pr_warning("Please use -t option too to keep the old behavior.\n");
+	}
 
 	return 0;
 }
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index 76542b86e83f..5af8f6d1bc95 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -460,7 +460,6 @@ static const char *lock_contention_get_name(struct lock_contention *con,
 {
 	int idx = 0;
 	u64 addr;
-	const char *name = "";
 	static char name_buf[KSYM_NAME_LEN];
 	struct symbol *sym;
 	struct map *kmap;
@@ -475,13 +474,14 @@ static const char *lock_contention_get_name(struct lock_contention *con,
 		if (pid) {
 			struct thread *t = machine__findnew_thread(machine, /*pid=*/-1, pid);
 
-			if (t == NULL)
-				return name;
-			if (!bpf_map_lookup_elem(task_fd, &pid, &task) &&
-			    thread__set_comm(t, task.comm, /*timestamp=*/0))
-				name = task.comm;
+			if (t != NULL &&
+			    !bpf_map_lookup_elem(task_fd, &pid, &task) &&
+			    thread__set_comm(t, task.comm, /*timestamp=*/0)) {
+				snprintf(name_buf, sizeof(name_buf), "%s", task.comm);
+				return name_buf;
+			}
 		}
-		return name;
+		return "";
 	}
 
 	if (con->aggr_mode == LOCK_AGGR_ADDR) {
@@ -549,6 +549,63 @@ static const char *lock_contention_get_name(struct lock_contention *con,
 	return name_buf;
 }
 
+struct lock_stat *pop_owner_stack_trace(struct lock_contention *con)
+{
+	int stacks_fd, stat_fd;
+	u64 *stack_trace = NULL;
+	s32 stack_id;
+	struct contention_key ckey = {};
+	struct contention_data cdata = {};
+	size_t stack_size = con->max_stack * sizeof(*stack_trace);
+	struct lock_stat *st = NULL;
+
+	stacks_fd = bpf_map__fd(skel->maps.owner_stacks);
+	stat_fd = bpf_map__fd(skel->maps.owner_stat);
+	if (!stacks_fd || !stat_fd)
+		goto out_err;
+
+	stack_trace = zalloc(stack_size);
+	if (stack_trace == NULL)
+		goto out_err;
+
+	if (bpf_map_get_next_key(stacks_fd, NULL, stack_trace))
+		goto out_err;
+
+	bpf_map_lookup_elem(stacks_fd, stack_trace, &stack_id);
+	ckey.stack_id = stack_id;
+	bpf_map_lookup_elem(stat_fd, &ckey, &cdata);
+
+	st = zalloc(sizeof(struct lock_stat));
+	if (!st)
+		goto out_err;
+
+	st->name = strdup(stack_trace[0] ? lock_contention_get_name(con, NULL, stack_trace, 0) :
+			  "unknown");
+	if (!st->name)
+		goto out_err;
+
+	st->flags = cdata.flags;
+	st->nr_contended = cdata.count;
+	st->wait_time_total = cdata.total_time;
+	st->wait_time_max = cdata.max_time;
+	st->wait_time_min = cdata.min_time;
+	st->callstack = stack_trace;
+
+	if (cdata.count)
+		st->avg_wait_time = cdata.total_time / cdata.count;
+
+	bpf_map_delete_elem(stacks_fd, stack_trace);
+	bpf_map_delete_elem(stat_fd, &ckey);
+
+	return st;
+
+out_err:
+	free(stack_trace);
+	free(st);
+
+	return NULL;
+}
+
 int lock_contention_read(struct lock_contention *con)
 {
 	int fd, stack, err = 0;
diff --git a/tools/perf/util/lock-contention.h b/tools/perf/util/lock-contention.h
index a09f7fe877df..1da779d75b5f 100644
--- a/tools/perf/util/lock-contention.h
+++ b/tools/perf/util/lock-contention.h
@@ -168,6 +168,8 @@ int lock_contention_stop(void);
 int lock_contention_read(struct lock_contention *con);
 int lock_contention_finish(struct lock_contention *con);
 
+struct lock_stat *pop_owner_stack_trace(struct lock_contention *con);
+
 #else  /* !HAVE_BPF_SKEL */
 
 static inline int lock_contention_prepare(struct lock_contention *con __maybe_unused)
@@ -187,6 +189,11 @@ static inline int lock_contention_read(struct lock_contention *con __maybe_unuse
 	return 0;
 }
 
+struct lock_stat *pop_owner_stack_trace(struct lock_contention *con __maybe_unused)
+{
+	return NULL;
+}
+
 #endif  /* HAVE_BPF_SKEL */
 
 #endif  /* PERF_LOCK_CONTENTION_H */