From patchwork Thu Apr 8 10:35:56 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12190567
Date: Thu, 8 Apr 2021 12:35:56 +0200
In-Reply-To: <20210408103605.1676875-1-elver@google.com>
Message-Id: <20210408103605.1676875-2-elver@google.com>
References: <20210408103605.1676875-1-elver@google.com>
Subject: [PATCH v4 01/10] perf: Rework perf_event_exit_event()
From: Marco Elver
To: elver@google.com, peterz@infradead.org, alexander.shishkin@linux.intel.com,
    acme@kernel.org, mingo@redhat.com, jolsa@redhat.com, mark.rutland@arm.com,
    namhyung@kernel.org, tglx@linutronix.de
Cc: glider@google.com, viro@zeniv.linux.org.uk, arnd@arndb.de,
    christian@brauner.io, dvyukov@google.com, jannh@google.com, axboe@kernel.dk,
    mascasa@google.com, pcc@google.com, irogers@google.com, oleg@redhat.com,
    kasan-dev@googlegroups.com, linux-arch@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org,
    linux-kselftest@vger.kernel.org
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Peter Zijlstra

Make perf_event_exit_event() more robust, such that we can use it from
other contexts, specifically the upcoming remove_on_exec.

For this to work we need to address a few issues. Remove_on_exec will
not destroy the entire context, so we cannot rely on TASK_TOMBSTONE to
disable event_function_call() and we thus have to use
perf_remove_from_context().

When using perf_remove_from_context(), there are two races to consider.
The first is against close(), where we can have concurrent tear-down of
the event. The second is against child_list iteration, which should not
find a half-baked event.

To address this, teach perf_remove_from_context() to special-case
!ctx->is_active and about DETACH_CHILD.

Signed-off-by: Peter Zijlstra (Intel)
[ elver@google.com: fix racing parent/child exit in sync_child_event(). ]
Signed-off-by: Marco Elver
---
v4:
* Fix for parent and child racing to exit in sync_child_event().

v3:
* New dependency for series:
  https://lkml.kernel.org/r/YFn/I3aKF+TOjGcl@hirez.programming.kicks-ass.net
---
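Note (reading aid, not part of the patch): below is a standalone, simplified C
model of the reworked perf_remove_from_context() control flow described above,
where an inactive context is handled directly under ctx->lock and everything
else still goes through the cross-CPU call. All *_model types and helpers are
illustrative stand-ins, not the real kernel structures or APIs.

/*
 * Simplified userspace model (not kernel code) of the new
 * perf_remove_from_context() flow.  All *_model names are stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define DETACH_GROUP    0x01UL
#define DETACH_CHILD    0x02UL

struct ctx_model {
        bool is_active;                 /* is the context still scheduled in? */
};

struct event_model {
        struct ctx_model *ctx;
        bool in_group;                  /* stands in for PERF_ATTACH_GROUP */
        bool is_child;                  /* stands in for PERF_ATTACH_CHILD */
};

/* Models __perf_remove_from_context(); the real caller holds ctx->lock. */
static void __remove_from_context_model(struct event_model *event, unsigned long flags)
{
        if (flags & DETACH_GROUP)
                event->in_group = false;        /* perf_group_detach() */
        if (flags & DETACH_CHILD)
                event->is_child = false;        /* perf_child_detach() */
}

static void remove_from_context_model(struct event_model *event, unsigned long flags)
{
        if (!event->ctx->is_active) {
                /* Inactive context: remove directly, no cross-call needed. */
                __remove_from_context_model(event, flags);
                return;
        }
        /* Active context: the kernel uses event_function_call() here. */
        printf("would use event_function_call()\n");
}

int main(void)
{
        struct ctx_model ctx = { .is_active = false };
        struct event_model ev = { .ctx = &ctx, .in_group = true, .is_child = true };

        remove_from_context_model(&ev, DETACH_GROUP | DETACH_CHILD);
        printf("in_group=%d is_child=%d\n", ev.in_group, ev.is_child);
        return 0;
}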
 include/linux/perf_event.h |   1 +
 kernel/events/core.c       | 142 +++++++++++++++++++++----------------
 2 files changed, 80 insertions(+), 63 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 3f7f89ea5e51..3d478abf411c 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -607,6 +607,7 @@ struct swevent_hlist {
 #define PERF_ATTACH_TASK_DATA   0x08
 #define PERF_ATTACH_ITRACE      0x10
 #define PERF_ATTACH_SCHED_CB    0x20
+#define PERF_ATTACH_CHILD       0x40
 
 struct perf_cgroup;
 struct perf_buffer;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 03db40f6cba9..e77294c7e654 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2204,6 +2204,26 @@ static void perf_group_detach(struct perf_event *event)
         perf_event__header_size(leader);
 }
 
+static void sync_child_event(struct perf_event *child_event);
+
+static void perf_child_detach(struct perf_event *event)
+{
+        struct perf_event *parent_event = event->parent;
+
+        if (!(event->attach_state & PERF_ATTACH_CHILD))
+                return;
+
+        event->attach_state &= ~PERF_ATTACH_CHILD;
+
+        if (WARN_ON_ONCE(!parent_event))
+                return;
+
+        lockdep_assert_held(&parent_event->child_mutex);
+
+        sync_child_event(event);
+        list_del_init(&event->child_list);
+}
+
 static bool is_orphaned_event(struct perf_event *event)
 {
         return event->state == PERF_EVENT_STATE_DEAD;
@@ -2311,6 +2331,7 @@ group_sched_out(struct perf_event *group_event,
 }
 
 #define DETACH_GROUP    0x01UL
+#define DETACH_CHILD    0x02UL
 
 /*
  * Cross CPU call to remove a performance event
@@ -2334,6 +2355,8 @@ __perf_remove_from_context(struct perf_event *event,
                 event_sched_out(event, cpuctx, ctx);
         if (flags & DETACH_GROUP)
                 perf_group_detach(event);
+        if (flags & DETACH_CHILD)
+                perf_child_detach(event);
         list_del_event(event, ctx);
 
         if (!ctx->nr_events && ctx->is_active) {
@@ -2362,25 +2385,21 @@ static void perf_remove_from_context(struct perf_event *event, unsigned long fla
 
         lockdep_assert_held(&ctx->mutex);
 
-        event_function_call(event, __perf_remove_from_context, (void *)flags);
-
         /*
-         * The above event_function_call() can NO-OP when it hits
-         * TASK_TOMBSTONE. In that case we must already have been detached
-         * from the context (by perf_event_exit_event()) but the grouping
-         * might still be in-tact.
+         * Because of perf_event_exit_task(), perf_remove_from_context() ought
+         * to work in the face of TASK_TOMBSTONE, unlike every other
+         * event_function_call() user.
          */
-        WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT);
-        if ((flags & DETACH_GROUP) &&
-            (event->attach_state & PERF_ATTACH_GROUP)) {
-                /*
-                 * Since in that case we cannot possibly be scheduled, simply
-                 * detach now.
-                 */
-                raw_spin_lock_irq(&ctx->lock);
-                perf_group_detach(event);
+        raw_spin_lock_irq(&ctx->lock);
+        if (!ctx->is_active) {
+                __perf_remove_from_context(event, __get_cpu_context(ctx),
+                                           ctx, (void *)flags);
                 raw_spin_unlock_irq(&ctx->lock);
+                return;
         }
+        raw_spin_unlock_irq(&ctx->lock);
+
+        event_function_call(event, __perf_remove_from_context, (void *)flags);
 }
 
 /*
@@ -12373,14 +12392,17 @@ void perf_pmu_migrate_context(struct pmu *pmu, int src_cpu, int dst_cpu)
 }
 EXPORT_SYMBOL_GPL(perf_pmu_migrate_context);
 
-static void sync_child_event(struct perf_event *child_event,
-                             struct task_struct *child)
+static void sync_child_event(struct perf_event *child_event)
 {
         struct perf_event *parent_event = child_event->parent;
         u64 child_val;
 
-        if (child_event->attr.inherit_stat)
-                perf_event_read_event(child_event, child);
+        if (child_event->attr.inherit_stat) {
+                struct task_struct *task = child_event->ctx->task;
+
+                if (task && task != TASK_TOMBSTONE)
+                        perf_event_read_event(child_event, task);
+        }
 
         child_val = perf_event_count(child_event);
 
@@ -12395,60 +12417,53 @@ static void sync_child_event(struct perf_event *child_event,
 }
 
 static void
-perf_event_exit_event(struct perf_event *child_event,
-                      struct perf_event_context *child_ctx,
-                      struct task_struct *child)
+perf_event_exit_event(struct perf_event *event, struct perf_event_context *ctx)
 {
-        struct perf_event *parent_event = child_event->parent;
+        struct perf_event *parent_event = event->parent;
+        unsigned long detach_flags = 0;
 
-        /*
-         * Do not destroy the 'original' grouping; because of the context
-         * switch optimization the original events could've ended up in a
-         * random child task.
-         *
-         * If we were to destroy the original group, all group related
-         * operations would cease to function properly after this random
-         * child dies.
-         *
-         * Do destroy all inherited groups, we don't care about those
-         * and being thorough is better.
-         */
-        raw_spin_lock_irq(&child_ctx->lock);
-        WARN_ON_ONCE(child_ctx->is_active);
+        if (parent_event) {
+                /*
+                 * Do not destroy the 'original' grouping; because of the
+                 * context switch optimization the original events could've
+                 * ended up in a random child task.
+                 *
+                 * If we were to destroy the original group, all group related
+                 * operations would cease to function properly after this
+                 * random child dies.
+                 *
+                 * Do destroy all inherited groups, we don't care about those
+                 * and being thorough is better.
+                 */
+                detach_flags = DETACH_GROUP | DETACH_CHILD;
+                mutex_lock(&parent_event->child_mutex);
+        }
 
-        if (parent_event)
-                perf_group_detach(child_event);
-        list_del_event(child_event, child_ctx);
-        perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */
-        raw_spin_unlock_irq(&child_ctx->lock);
+        perf_remove_from_context(event, detach_flags);
+
+        raw_spin_lock_irq(&ctx->lock);
+        if (event->state > PERF_EVENT_STATE_EXIT)
+                perf_event_set_state(event, PERF_EVENT_STATE_EXIT);
+        raw_spin_unlock_irq(&ctx->lock);
 
         /*
-         * Parent events are governed by their filedesc, retain them.
+         * Child events can be freed.
          */
-        if (!parent_event) {
-                perf_event_wakeup(child_event);
+        if (parent_event) {
+                mutex_unlock(&parent_event->child_mutex);
+                /*
+                 * Kick perf_poll() for is_event_hup();
+                 */
+                perf_event_wakeup(parent_event);
+                free_event(event);
+                put_event(parent_event);
                 return;
         }
-        /*
-         * Child events can be cleaned up.
-         */
-
-        sync_child_event(child_event, child);
 
         /*
-         * Remove this event from the parent's list
-         */
-        WARN_ON_ONCE(parent_event->ctx->parent_ctx);
-        mutex_lock(&parent_event->child_mutex);
-        list_del_init(&child_event->child_list);
-        mutex_unlock(&parent_event->child_mutex);
-
-        /*
-         * Kick perf_poll() for is_event_hup().
+         * Parent events are governed by their filedesc, retain them.
          */
-        perf_event_wakeup(parent_event);
-        free_event(child_event);
-        put_event(parent_event);
+        perf_event_wakeup(event);
 }
 
 static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
@@ -12505,7 +12520,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
         perf_event_task(child, child_ctx, 0);
 
         list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry)
-                perf_event_exit_event(child_event, child_ctx, child);
+                perf_event_exit_event(child_event, child_ctx);
 
         mutex_unlock(&child_ctx->mutex);
 
@@ -12765,6 +12780,7 @@ inherit_event(struct perf_event *parent_event,
          */
         raw_spin_lock_irqsave(&child_ctx->lock, flags);
         add_event_to_ctx(child_event, child_ctx);
+        child_event->attach_state |= PERF_ATTACH_CHILD;
         raw_spin_unlock_irqrestore(&child_ctx->lock, flags);
 
         /*
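Note (reading aid, not part of the patch): a similar standalone sketch of the
PERF_ATTACH_CHILD lifecycle this patch introduces. Attach and detach both
happen under the parent's child_mutex and detach is idempotent, mirroring the
attach_state check in perf_child_detach(), so a concurrent child_list walk
never sees a half-torn-down child. Every *_model name is an illustrative
stand-in, not a kernel API.

/*
 * Simplified userspace model (not kernel code) of the PERF_ATTACH_CHILD
 * lifecycle.  All *_model names are stand-ins.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct child_event_model {
        uint64_t count;
        int attach_child;                       /* models PERF_ATTACH_CHILD */
        struct child_event_model *parent;
        struct child_event_model *next;         /* parent's child_list */
        pthread_mutex_t child_mutex;            /* used on parent events only */
};

/* Models inherit_event(): link the child and mark it attached. */
static void inherit_event_model(struct child_event_model *parent,
                                struct child_event_model *child)
{
        child->parent = parent;
        pthread_mutex_lock(&parent->child_mutex);
        child->next = parent->next;
        parent->next = child;
        child->attach_child = 1;
        pthread_mutex_unlock(&parent->child_mutex);
}

/* Models perf_child_detach() + sync_child_event() under child_mutex. */
static void child_detach_model(struct child_event_model *child)
{
        struct child_event_model *parent = child->parent, **pp;

        pthread_mutex_lock(&parent->child_mutex);
        if (child->attach_child) {
                child->attach_child = 0;
                parent->count += child->count;          /* sync_child_event() */
                for (pp = &parent->next; *pp; pp = &(*pp)->next) {
                        if (*pp == child) {
                                *pp = child->next;      /* list_del_init() */
                                break;
                        }
                }
        }
        pthread_mutex_unlock(&parent->child_mutex);
}

int main(void)
{
        struct child_event_model parent = { .child_mutex = PTHREAD_MUTEX_INITIALIZER };
        struct child_event_model child = { .count = 42 };

        inherit_event_model(&parent, &child);
        child_detach_model(&child);     /* e.g. the exiting child task */
        child_detach_model(&child);     /* racing second caller: nothing left to do */
        printf("parent count: %llu\n", (unsigned long long)parent.count);
        return 0;
}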