Date: Tue, 19 Jul 2022 18:20:04 -0400
From: Steven Rostedt
To: LKML
Cc: Ingo Molnar, Andrew Morton, Arun Easi, Daniel Wagner, Nilesh Javali,
    Greg Kroah-Hartman
Subject: [PATCH] tracing: Use a copy of the va_list for __assign_vstr()
Message-ID: <20220719182004.21daa83e@gandalf.local.home>

From: "Steven Rostedt (Google)"

If an instance of tracing enables the same trace event as another
instance, or the top level instance, or even perf, then the va_list
passed into some tracepoints can be used more than once. As a va_list
can only be traversed once, this can cause issues:

 # cat /sys/kernel/tracing/instances/qla2xxx/trace
 cat-56106 [012] ..... 2419873.470098: ql_dbg_log: qla2xxx [0000:05:00.0]-1054:14: Entered (null).
 cat-56106 [012] ..... 2419873.470101: ql_dbg_log: qla2xxx [0000:05:00.0]-1000:14: Entered ×+<96>²Ü<98>^H.
 cat-56106 [012] ..... 2419873.470102: ql_dbg_log: qla2xxx [0000:05:00.0]-1006:14: Prepare to issue mbox cmd=0xde589000.

 # cat /sys/kernel/tracing/trace
 cat-56106 [012] ..... 2419873.470097: ql_dbg_log: qla2xxx [0000:05:00.0]-1054:14: Entered qla2x00_get_firmware_state.
 cat-56106 [012] ..... 2419873.470100: ql_dbg_log: qla2xxx [0000:05:00.0]-1000:14: Entered qla2x00_mailbox_command.
 cat-56106 [012] ..... 2419873.470102: ql_dbg_log: qla2xxx [0000:05:00.0]-1006:14: Prepare to issue mbox cmd=0x69.

The instance's version is corrupted because the top level instance
traversed the va_list first. Use va_copy() in the __assign_vstr() macro
to make sure that each trace event, for each use case, gets a fresh
va_list.

Link: https://lore.kernel.org/all/259d53a5-958e-6508-4e45-74dba2821242@marvell.com/
Reported-by: Arun Easi
Signed-off-by: Steven Rostedt (Google)
---
This means that the __vstring()/__assign_vstr() series, with this patch,
is actually a bug fix and not just a clean up. These will probably need
to go to stable after they hit Linus's tree. I'll still wait till the
merge window, as it's not far away, and I'd like these to sit in
linux-next for a bit too.
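As an aside for reviewers unfamiliar with the pitfall: a va_list is
stateful, so handing the same one to two consumers corrupts whatever
the second consumer reads. Here is a minimal user-space sketch of the
va_copy() pattern this patch applies (the function names and buffer
sizes are made up for illustration; this is not kernel code):

	#include <stdarg.h>
	#include <stdio.h>

	static void format_twice(char *buf1, char *buf2, size_t len,
				 const char *fmt, va_list *va)
	{
		va_list cp;

		/* First consumer formats from its own copy; *va stays untouched. */
		va_copy(cp, *va);
		vsnprintf(buf1, len, fmt, cp);
		va_end(cp);

		/*
		 * Second consumer takes a fresh copy too. Without va_copy(),
		 * this vsnprintf() would pick up where the first one left
		 * off and read garbage, like the trace output above.
		 */
		va_copy(cp, *va);
		vsnprintf(buf2, len, fmt, cp);
		va_end(cp);
	}

	static void log_msg(const char *fmt, ...)
	{
		char a[64], b[64];
		va_list va;

		va_start(va, fmt);
		format_twice(a, b, sizeof(a), fmt, &va);
		va_end(va);
		printf("%s\n%s\n", a, b);	/* both lines print the same string */
	}

	int main(void)
	{
		log_msg("Entered %s.", "qla2x00_get_firmware_state");
		return 0;
	}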
 include/trace/stages/stage6_event_callback.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/trace/stages/stage6_event_callback.h b/include/trace/stages/stage6_event_callback.h
index 0f51f6b3ab70..3c554a585320 100644
--- a/include/trace/stages/stage6_event_callback.h
+++ b/include/trace/stages/stage6_event_callback.h
@@ -40,7 +40,12 @@
 
 #undef __assign_vstr
 #define __assign_vstr(dst, fmt, va)					\
-	vsnprintf(__get_str(dst), TRACE_EVENT_STR_MAX, fmt, *(va))
+	do {								\
+		va_list __cp_va;					\
+		va_copy(__cp_va, *(va));				\
+		vsnprintf(__get_str(dst), TRACE_EVENT_STR_MAX, fmt, __cp_va); \
+		va_end(__cp_va);					\
+	} while (0)
 
 #undef __bitmask
 #define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)
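For context, a hedged sketch of where __vstring()/__assign_vstr() sit
in an event definition follows. The event name, prototype, and header
guard are hypothetical; only the two macros come from this series:

	/* include/trace/events/sample_vstr.h -- hypothetical event header */
	#undef TRACE_SYSTEM
	#define TRACE_SYSTEM sample

	#if !defined(_TRACE_SAMPLE_VSTR_H) || defined(TRACE_HEADER_MULTI_READ)
	#define _TRACE_SAMPLE_VSTR_H

	#include <linux/tracepoint.h>

	TRACE_EVENT(sample_vstr,
		TP_PROTO(struct va_format *vaf),
		TP_ARGS(vaf),
		TP_STRUCT__entry(
			/* Reserves a dynamic string sized from fmt/args,
			 * capped at TRACE_EVENT_STR_MAX. */
			__vstring(msg, vaf->fmt, vaf->va)
		),
		TP_fast_assign(
			/* With this patch, formats from a va_copy(), so other
			 * consumers of the same tracepoint (other instances,
			 * perf) still see an untouched va_list. */
			__assign_vstr(msg, vaf->fmt, vaf->va);
		),
		TP_printk("%s", __get_str(msg))
	);
	#endif /* _TRACE_SAMPLE_VSTR_H */

	/* This part must be outside protection */
	#include <trace/define_trace.h>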