Message ID | 20210711104105.505728-12-leo.yan@linaro.org (mailing list archive)
---|---
State | New, archived
Series | perf: Refine barriers for AUX ring buffer
On Sun, Jul 11, 2021 at 06:41:05PM +0800, Leo Yan wrote:
> When perf runs in compat mode (kernel in 64-bit mode and the perf is in
> 32-bit mode), the 64-bit value atomicity in the user space cannot be
> assured, E.g. on some architectures, the 64-bit value accessing is split
> into two instructions, one is for the low 32-bit word accessing and
> another is for the high 32-bit word.

Does this apply to 32-bit ARM code on aarch64? I would not have thought
it would, as the structure member is a __u64 and
compat_auxtrace_mmap__read_head() doesn't seem to be marking anything
as packed, so the compiler _should_ be able to use a LDRD instruction
to load the value.

Is this a problem noticed on non-ARM architectures?

Thanks.
On 11/07/21 1:41 pm, Leo Yan wrote:
> When perf runs in compat mode (kernel in 64-bit mode and the perf is in
> 32-bit mode), the 64-bit value atomicity in the user space cannot be
> assured, E.g. on some architectures, the 64-bit value accessing is split
> into two instructions, one is for the low 32-bit word accessing and
> another is for the high 32-bit word.
>
> This patch introduces two functions compat_auxtrace_mmap__read_head()
> and compat_auxtrace_mmap__write_tail(), as their naming indicates, when
> perf tool works in compat mode, it uses these two functions to access
> the AUX head and tail. These two functions can allow the perf tool to
> work properly in certain conditions, e.g. when perf tool works in
> snapshot mode with only using AUX head pointer, or perf tool uses the
> AUX buffer and the incremented tail is not bigger than 4GB.
>
> When perf tool cannot handle the case when the AUX tail is bigger than
> 4GB, the function compat_auxtrace_mmap__write_tail() returns -1 and
> tells the caller to bail out for the error.
>
> Suggested-by: Adrian Hunter <adrian.hunter@intel.com>
> Signed-off-by: Leo Yan <leo.yan@linaro.org>
> ---
>  tools/perf/util/auxtrace.c |  9 ++--
>  tools/perf/util/auxtrace.h | 94 +++++++++++++++++++++++++++++++++++++-
>  2 files changed, 98 insertions(+), 5 deletions(-)

[...]

> + * For this reason, it's impossible for the perf tool to work correctly when
> + * the AUX head or tail is bigger than 4GB (more than 32 bits length); and we
> + * can not simply limit the AUX ring buffer to less than 4GB, the reason is
> + * the pointers can be increased monotonically (e.g in snapshot mode), whatever

At least for Intel PT, in snapshot mode, the head is always an offset
into the buffer, so never more than 4GB for a 32-bit perf tool. So maybe
leave out "(e.g in snapshot mode)"

[...]
Hi Russell,

On Mon, Jul 12, 2021 at 03:44:11PM +0100, Russell King (Oracle) wrote:
> On Sun, Jul 11, 2021 at 06:41:05PM +0800, Leo Yan wrote:
> > When perf runs in compat mode (kernel in 64-bit mode and the perf is in
> > 32-bit mode), the 64-bit value atomicity in the user space cannot be
> > assured, E.g. on some architectures, the 64-bit value accessing is split
> > into two instructions, one is for the low 32-bit word accessing and
> > another is for the high 32-bit word.
>
> Does this apply to 32-bit ARM code on aarch64? I would not have thought
> it would, as the structure member is a __u64 and
> compat_auxtrace_mmap__read_head() doesn't seem to be marking anything
> as packed, so the compiler _should_ be able to use a LDRD instruction
> to load the value.

I think essentially your question is relevant to the memory model.
For a 32-bit Arm application on aarch64, the Armv8 architecture
reference manual ARM DDI 0487F.c, chapter "E2.2.1 Requirements for
single-copy atomicity", describes:

"LDM, LDC, LDRD, STM, STC, STRD, PUSH, POP, RFE, SRS, VLDM, VLDR, VSTM,
and VSTR instructions are executed as a sequence of word-aligned word
accesses. Each 32-bit word access is guaranteed to be single-copy
atomic. The architecture does not require subsequences of two or more
word accesses from the sequence to be single-copy atomic."

So I think the LDRD/STRD instructions cannot promise atomicity for
loading or storing two words in 32-bit Arm.

Another thought is that compat_auxtrace_mmap__read_head() is a generic
function, so I avoided writing it with any architecture-specific
instructions.

> Is this a problem noticed on non-ARM architectures?

No, actually we just concluded the potential issue based on the
analysis for the weak memory model.

Thanks,
Leo
On Tue, Jul 13, 2021 at 10:07:03AM +0300, Adrian Hunter wrote:

[...]

> > + * For this reason, it's impossible for the perf tool to work correctly when
> > + * the AUX head or tail is bigger than 4GB (more than 32 bits length); and we
> > + * can not simply limit the AUX ring buffer to less than 4GB, the reason is
> > + * the pointers can be increased monotonically (e.g in snapshot mode), whatever
>
> At least for Intel PT, in snapshot mode, the head is always an offset
> into the buffer, so never more than 4GB for a 32-bit perf tool. So maybe
> leave out "(e.g in snapshot mode)"

Sure, will leave out "(e.g in snapshot mode)".

Thanks,
Leo
On Tue, Jul 13, 2021 at 11:46:02PM +0800, Leo Yan wrote:
> Hi Russell,
>
> On Mon, Jul 12, 2021 at 03:44:11PM +0100, Russell King (Oracle) wrote:
> > On Sun, Jul 11, 2021 at 06:41:05PM +0800, Leo Yan wrote:
> > > When perf runs in compat mode (kernel in 64-bit mode and the perf is in
> > > 32-bit mode), the 64-bit value atomicity in the user space cannot be
> > > assured, E.g. on some architectures, the 64-bit value accessing is split
> > > into two instructions, one is for the low 32-bit word accessing and
> > > another is for the high 32-bit word.
> >
> > Does this apply to 32-bit ARM code on aarch64? I would not have thought
> > it would, as the structure member is a __u64 and
> > compat_auxtrace_mmap__read_head() doesn't seem to be marking anything
> > as packed, so the compiler _should_ be able to use a LDRD instruction
> > to load the value.
>
> I think essentially your question is relevant to the memory model.
> For 32-bit Arm application on aarch64, in the Armv8 architecture
> reference manual ARM DDI 0487F.c, chapter "E2.2.1 Requirements for
> single-copy atomicity" describes:
>
> "LDM, LDC, LDRD, STM, STC, STRD, PUSH, POP, RFE, SRS, VLDM, VLDR, VSTM,
> and VSTR instructions are executed as a sequence of word-aligned word
> accesses. Each 32-bit word access is guaranteed to be single-copy
> atomic. The architecture does not require subsequences of two or more
> word accesses from the sequence to be single-copy atomic."

... which is an interesting statement for ARMv7 code. DDI0406C says
similar but goes on to say:

  In an implementation that includes the Large Physical Address
  Extension, LDRD and STRD accesses to 64-bit aligned locations
  are 64-bit single-copy atomic as seen by translation table
  walks and accesses to translation tables.

then states that such page tables must be in memory that is capable of
supporting 64-bit single-copy atomic accesses.

In Linux, we assume all RAM that the kernel has access to can contain
page tables. So by implication, all RAM that the kernel has access to
and exposes to userspace must be 64-bit single-copy atomic (if not, we
have a rather serious bug.)

The remaining question is whether it would be sane for LDRD and STRD
to be single-copy atomic to translation table walkers but not to other
CPUs. Since Linux expects to be able to modify the page tables from any
CPU in the system, this requirement must hold, otherwise it's going to
be a really strangely designed system.

Therefore, I put it that for Linux to operate correctly on 32-bit Arm
CPUs with LPAE, LDRD and STRD must be 64-bit single-copy atomic in
spite of what the architecture reference documentation may allow.

Now, since we allow 32-bit ARM kernels to run under KVM on ARMv8, it
would be pretty silly if this was broken on aarch64 - it would mean such
a guest would have no way to atomically update the LPAE page tables. We
know that's not true, since we can run 32-bit kernels and userspace
just fine under aarch64.

I'd be interested to hear what Catalin and Will have to say on this,
but I suspect in practice, Arm systems that are running Linux with LPAE
(ARMv7+LPAE, ARMv8) will implement LDRD and STRD with 64-bit
single-copy atomic semantics.
On Tue, Jul 13, 2021 at 05:14:41PM +0100, Russell King wrote:
> On Tue, Jul 13, 2021 at 11:46:02PM +0800, Leo Yan wrote:
> > On Mon, Jul 12, 2021 at 03:44:11PM +0100, Russell King (Oracle) wrote:
> > > On Sun, Jul 11, 2021 at 06:41:05PM +0800, Leo Yan wrote:
> > > > When perf runs in compat mode (kernel in 64-bit mode and the perf is in
> > > > 32-bit mode), the 64-bit value atomicity in the user space cannot be
> > > > assured, E.g. on some architectures, the 64-bit value accessing is split
> > > > into two instructions, one is for the low 32-bit word accessing and
> > > > another is for the high 32-bit word.
> > >
> > > Does this apply to 32-bit ARM code on aarch64? I would not have thought
> > > it would, as the structure member is a __u64 and
> > > compat_auxtrace_mmap__read_head() doesn't seem to be marking anything
> > > as packed, so the compiler _should_ be able to use a LDRD instruction
> > > to load the value.
> >
> > I think essentially your question is relevant to the memory model.
> > For 32-bit Arm application on aarch64, in the Armv8 architecture
> > reference manual ARM DDI 0487F.c, chapter "E2.2.1 Requirements for
> > single-copy atomicity" describes:
> >
> > "LDM, LDC, LDRD, STM, STC, STRD, PUSH, POP, RFE, SRS, VLDM, VLDR, VSTM,
> > and VSTR instructions are executed as a sequence of word-aligned word
> > accesses. Each 32-bit word access is guaranteed to be single-copy
> > atomic. The architecture does not require subsequences of two or more
> > word accesses from the sequence to be single-copy atomic."
>
> ... which is an interesting statement for ARMv7 code. DDI0406C says
> similar but goes on to say:
>
>   In an implementation that includes the Large Physical Address
>   Extension, LDRD and STRD accesses to 64-bit aligned locations
>   are 64-bit single-copy atomic as seen by translation table
>   walks and accesses to translation tables.
>
> then states that such page tables must be in memory that is capable of
> supporting 64-bit single-copy atomic accesses.

A similar statement is in the ARMv8 ARM (E2.2.1 in version G.a).

> In Linux, we assume all RAM that the kernel has access to can contain
> page tables. So by implication, all RAM that the kernel has access to
> and exposes to userspace must be 64-bit single-copy atomic (if not, we
> have a rather serious bug.)

Indeed. We should assume that the SDRAM supports all the CPU features.

> The remaining question is whether it would be sane for LDRD and STRD
> to be single-copy atomic to translation table walkers but not to other
> CPUs. Since Linux expects to be able to modify the page tables from any
> CPU in the system, this requirement must hold, otherwise it's going to
> be a really strangely designed system.

The above statement does say "translation table walks and accesses to
translation tables". The accesses can be LDRD/STRD instructions from
other CPUs. Since the hardware can't tell whether the access is to a
page table, the designers just made LDRD/STRD single-copy atomic.

> I'd be interested to hear what Catalin and Will have to say on this,
> but I suspect in practice, Arm systems that are running Linux with LPAE
> (ARMv7+LPAE, ARMv8) will implement LDRD and STRD with 64-bit
> single-copy atomic semantics.

That's my understanding as well. In theory one could have a page table
access from EL0, so it should be atomic. We could try to clarify E2.2.1
to simply state that naturally aligned LDRD/STRD are single-copy atomic
without any subsequent statement on the translation table.
On Tue, Jul 13, 2021 at 07:13:02PM +0100, Catalin Marinas wrote:
> We could try to clarify E2.2.1 to simply state that naturally aligned
> LDRD/STRD are single-copy atomic without any subsequent statement on the
> translation table.

I think that clarification would be most helpful. Thanks.
On Wed, Jul 14, 2021 at 09:40:15AM +0100, Russell King (Oracle) wrote:
> On Tue, Jul 13, 2021 at 07:13:02PM +0100, Catalin Marinas wrote:
> > We could try to clarify E2.2.1 to simply state that naturally aligned
> > LDRD/STRD are single-copy atomic without any subsequent statement on the
> > translation table.
>
> I think that clarification would be most helpful. Thanks.

Thanks for the suggestion and confirmation, Russell & Catalin.

If so, I will implement weak functions for
compat_auxtrace_mmap__{read_head|write_tail}, and write the arm/arm64
specific functions using the LDRD/STRD instructions.

For better patch organization, I will use a separate patch set for
enabling the compat functions (in particular patches 10 and 11 of 11)
in the next spin.

Thanks,
Leo
diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index 6a63be8b2430..d6fc250fbf97 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -1766,10 +1766,13 @@ static int __auxtrace_mmap__read(struct mmap *map,
 	mm->prev = head;
 
 	if (!snapshot) {
-		auxtrace_mmap__write_tail(mm, head);
-		if (itr->read_finish) {
-			int err;
+		int err;
 
+		err = auxtrace_mmap__write_tail(mm, head);
+		if (err < 0)
+			return err;
+
+		if (itr->read_finish) {
 			err = itr->read_finish(itr, mm->idx);
 			if (err < 0)
 				return err;
diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
index d68a5e80b217..66de7b6e65ec 100644
--- a/tools/perf/util/auxtrace.h
+++ b/tools/perf/util/auxtrace.h
@@ -18,6 +18,8 @@
 #include <asm/bitsperlong.h>
 #include <asm/barrier.h>
 
+#include "env.h"
+
 union perf_event;
 struct perf_session;
 struct evlist;
@@ -440,23 +442,111 @@ struct auxtrace_cache;
 
 #ifdef HAVE_AUXTRACE_SUPPORT
 
+/*
+ * In the compat mode kernel runs in 64-bit and perf tool runs in 32-bit mode,
+ * 32-bit perf tool cannot access 64-bit value atomically, which might lead to
+ * the issues caused by the below sequence on multiple CPUs: when perf tool
+ * accesses either the load operation or the store operation for 64-bit value,
+ * on some architectures the operation is divided into two instructions, one
+ * is for accessing the low 32-bit value and another is for the high 32-bit;
+ * thus these two user operations can give the kernel chances to access the
+ * 64-bit value, and thus leads to the unexpected load values.
+ *
+ *   kernel (64-bit)                      user (32-bit)
+ *
+ *   if (LOAD ->aux_tail) { --,           LOAD ->aux_head_lo
+ *       STORE $aux_data      |  ,--->
+ *       FLUSH $aux_data      |  |        LOAD ->aux_head_hi
+ *       STORE ->aux_head   --|-------`   smp_rmb()
+ *   }                        |           LOAD $data
+ *                            |           smp_mb()
+ *                            |           STORE ->aux_tail_lo
+ *                            `----------->
+ *                                        STORE ->aux_tail_hi
+ *
+ * For this reason, it's impossible for the perf tool to work correctly when
+ * the AUX head or tail is bigger than 4GB (more than 32 bits length); and we
+ * can not simply limit the AUX ring buffer to less than 4GB, the reason is
+ * the pointers can be increased monotonically (e.g in snapshot mode), whatever
+ * the buffer size it is, at the end the head and tail can be bigger than 4GB
+ * and carry out to the high 32-bit.
+ *
+ * To mitigate the issues and improve the user experience, we can allow the
+ * perf tool working in certain conditions and bail out with error if detect
+ * any overflow cannot be handled.
+ *
+ * For reading the AUX head, it reads out the values for three times, and
+ * compares the high 4 bytes of the values between the first time and the last
+ * time, if there has no change for high 4 bytes injected by the kernel during
+ * the user reading sequence, it's safe for use the second value.
+ *
+ * When update the AUX tail and detects any carrying in the high 32 bits, it
+ * means there have two store operations in user space and it cannot promise
+ * the atomicity for 64-bit write, so return '-1' in this case to tell the
+ * caller an overflow error has happened.
+ */
+static inline u64 compat_auxtrace_mmap__read_head(struct auxtrace_mmap *mm)
+{
+	struct perf_event_mmap_page *pc = mm->userpg;
+	u64 first, second, last;
+	u64 mask = (u64)(UINT32_MAX) << 32;
+
+	do {
+		first = READ_ONCE(pc->aux_head);
+		/* Ensure all reads are done after we read the head */
+		smp_rmb();
+		second = READ_ONCE(pc->aux_head);
+		/* Ensure all reads are done after we read the head */
+		smp_rmb();
+		last = READ_ONCE(pc->aux_head);
+	} while ((first & mask) != (last & mask));
+
+	return second;
+}
+
+static inline int compat_auxtrace_mmap__write_tail(struct auxtrace_mmap *mm,
+						   u64 tail)
+{
+	struct perf_event_mmap_page *pc = mm->userpg;
+	u64 mask = (u64)(UINT32_MAX) << 32;
+
+	if (tail & mask)
+		return -1;
+
+	/* Ensure all reads are done before we write the tail out */
+	smp_mb();
+	WRITE_ONCE(pc->aux_tail, tail);
+	return 0;
+}
+
 static inline u64 auxtrace_mmap__read_head(struct auxtrace_mmap *mm)
 {
 	struct perf_event_mmap_page *pc = mm->userpg;
-	u64 head = READ_ONCE(pc->aux_head);
+	u64 head;
+
+#if BITS_PER_LONG == 32
+	if (kernel_is_64_bit)
+		return compat_auxtrace_mmap__read_head(mm);
+#endif
+	head = READ_ONCE(pc->aux_head);
 
 	/* Ensure all reads are done after we read the head */
 	smp_rmb();
 	return head;
 }
 
-static inline void auxtrace_mmap__write_tail(struct auxtrace_mmap *mm, u64 tail)
+static inline int auxtrace_mmap__write_tail(struct auxtrace_mmap *mm, u64 tail)
 {
 	struct perf_event_mmap_page *pc = mm->userpg;
 
+#if BITS_PER_LONG == 32
+	if (kernel_is_64_bit)
+		return compat_auxtrace_mmap__write_tail(mm, tail);
+#endif
 	/* Ensure all reads are done before we write the tail out */
 	smp_mb();
 	WRITE_ONCE(pc->aux_tail, tail);
+	return 0;
 }
 
 int auxtrace_mmap__mmap(struct auxtrace_mmap *mm,
When perf runs in compat mode (kernel in 64-bit mode and the perf tool
in 32-bit mode), 64-bit value atomicity in the user space cannot be
assured. For example, on some architectures a 64-bit access is split
into two instructions, one accessing the low 32-bit word and another
the high 32-bit word.

This patch introduces two functions, compat_auxtrace_mmap__read_head()
and compat_auxtrace_mmap__write_tail(); as their naming indicates, when
the perf tool works in compat mode, it uses these two functions to
access the AUX head and tail. These two functions allow the perf tool
to work properly in certain conditions, e.g. when the perf tool works
in snapshot mode and only uses the AUX head pointer, or when it uses
the AUX buffer and the incremented tail is not bigger than 4GB.

When the perf tool cannot handle the case where the AUX tail is bigger
than 4GB, the function compat_auxtrace_mmap__write_tail() returns -1
and tells the caller to bail out with an error.

Suggested-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 tools/perf/util/auxtrace.c |  9 ++--
 tools/perf/util/auxtrace.h | 94 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 98 insertions(+), 5 deletions(-)