Message ID: 20220517020326.18580-1-alisaidi@amazon.com (mailing list archive)
Series:     perf: arm-spe: Decode SPE source and use for perf c2c
Em Tue, May 17, 2022 at 02:03:21AM +0000, Ali Saidi escreveu:
> When synthesizing data from SPE, augment the type with source information
> for Arm Neoverse cores so we can detect situations like cache line
> contention and transfers on Arm platforms.
>
> This change enables future changes to c2c on a system with SPE, where
> lines that are shared among multiple cores show up in perf c2c output.
>
> Changes in v9:
> * Change reporting of remote socket data, which should make Leo's upcoming
>   patch set for c2c make sense on multi-socket platforms

Hey,

	Joe Mario, who is one of the 'perf c2c' authors, asked me about some
git tree he could clone from for both building the kernel and
tools/perf/ so that he could do tests, can you please provide that?

thanks!

- Arnaldo

> Changes in v8:
> * Report NA for both mem_lvl and mem_lvl_num for stores where we have no
>   information
>
> Changes in v7:
> * Minor change requested by Leo Yan
>
> Changes in v6:
> * Drop changes to c2c command which will come from Leo Yan
>
> Changes in v5:
> * Add a new snooping type to disambiguate cache-to-cache transfers where
>   we don't know if the data is clean or dirty.
> * Set snoop flags on all the data-source cases
> * Special case stores as we have no information on them
>
> Changes in v4:
> * Bring in the kernel's arch/arm64/include/asm/cputype.h into tools/
> * Add neoverse-v1 to the neoverse cores list
>
> Ali Saidi (4):
>   tools: arm64: Import cputype.h
>   perf arm-spe: Use SPE data source for neoverse cores
>   perf mem: Support mem_lvl_num in c2c command
>   perf mem: Support HITM for when mem_lvl_num is any
>
>  tools/arch/arm64/include/asm/cputype.h         | 258 ++++++++++++++++++
>  .../util/arm-spe-decoder/arm-spe-decoder.c     |   1 +
>  .../util/arm-spe-decoder/arm-spe-decoder.h     |  12 +
>  tools/perf/util/arm-spe.c                      | 110 +++++++-
>  tools/perf/util/mem-events.c                   |  20 +-
>  5 files changed, 383 insertions(+), 18 deletions(-)
>  create mode 100644 tools/arch/arm64/include/asm/cputype.h
>
> --
> 2.32.0
Hi Arnaldo,

On Tue, May 17, 2022 at 06:20:03PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Tue, May 17, 2022 at 02:03:21AM +0000, Ali Saidi escreveu:

[...]

> Hey,
>
> 	Joe Mario, who is one of the 'perf c2c' authors, asked me about some
> git tree he could clone from for both building the kernel and
> tools/perf/ so that he could do tests, can you please provide that?

Sure, I will prepare a git tree for testing and share it with Joe.

> thanks!

Also, thanks for the reminder.

Leo
Hi Joe,

On Tue, May 17, 2022 at 06:20:03PM -0300, Arnaldo Carvalho de Melo wrote:

[...]

> Hey,
>
> 	Joe Mario, who is one of the 'perf c2c' authors, asked me about some
> git tree he could clone from for both building the kernel and
> tools/perf/ so that he could do tests, can you please provide that?

I have uploaded the latest patches for enabling 'perf c2c' on Arm SPE
to the repo:

  https://git.linaro.org/people/leo.yan/linux-spe.git   branch: perf_c2c_arm_spe_peer_v3

Below are quick notes for building the kernel with Arm SPE enabled:

$ git clone -b perf_c2c_arm_spe_peer_v3 https://git.linaro.org/people/leo.yan/linux-spe.git

Or

$ git clone -b perf_c2c_arm_spe_peer_v3 ssh://git@git.linaro.org/people/leo.yan/linux-spe.git

$ cd linux-spe

# Build kernel
$ make defconfig
$ ./scripts/config -e CONFIG_PID_IN_CONTEXTIDR
$ ./scripts/config -e CONFIG_ARM_SPE_PMU
$ make Image

# Build perf
$ cd tools/perf
$ make VF=1 DEBUG=1

When booting the kernel, please add the option "kpti=off" to the kernel
command line; you might need to update the grub menu for this.

Please feel free to let us know if anything is unclear.

Thank you,
Leo
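For reference, once the kernel and perf built above are running on an
SPE-capable Neoverse system, a minimal capture-and-report flow looks roughly
like the sketch below. This is only a sketch: the arm_spe_0 event name and
the load_filter/store_filter/ts_enable terms come from the arm_spe_pmu
driver's format directory, and "./your_workload" is a placeholder; exact
names may differ on a given system.

# Record loads and stores with SPE while the workload runs (system-wide,
# with timestamps enabled so samples can be correlated)
$ ./perf record -e arm_spe_0/load_filter=1,store_filter=1,ts_enable=1/ -a -- ./your_workload

# Analyze cache-line contention from the synthesized SPE memory samples
$ ./perf c2c report --stdio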
On 5/18/22 12:16 AM, Leo Yan wrote:
> Hi Joe,

[...]

> I have uploaded the latest patches for enabling 'perf c2c' on Arm SPE
> to the repo:
>
>   https://git.linaro.org/people/leo.yan/linux-spe.git   branch: perf_c2c_arm_spe_peer_v3

[...]

> Please feel free to let us know if anything is unclear.

Hi Leo:
Thanks for getting this working on ARM.  I do have a few comments.

I built and ran this on an ARM Neoverse-N1 system with 2 NUMA nodes.

Comment 1:
When I run "perf c2c report", the "Node" field is marked "N/A".  It's
supposed to show the NUMA node where the data address for the cacheline
resides.  That's important both to see what node hot data resides on and
whether that data is getting lots of cross-NUMA accesses.

Comment 2:
I'm assuming you're identifying the contended cachelines using the "peer"
load response, which indicates the load was resolved from a "peer" cpu's
cacheline.  Please confirm.

If that's true, is it possible to identify whether that "peer" response
came from the local or the remote NUMA node?

I ask because being able to identify both local and remote HitM's on
Intel x86_64 has been quite valuable.  That's because remote HitM's are
costly and because it helps the viewer see if they need to optimize their
cpu affinity or what node their hot data resides on.

Last Comment:
There's a row in the Pareto table that has incorrect column alignment.
Look at row 80 below in the truncated snippet of output.  It has an extra
field inserted at the beginning.
I also show what the corrected output should look like.

Incorrect row 80:
 71 =================================================
 72       Shared Cache Line Distribution Pareto
 73 =================================================
 74 #
 75 #        ----- HITM -----    Snoop  ------- Store Refs ------  ------- CL --------
 76 #        RmtHitm  LclHitm     Peer   L1 Hit  L1 Miss      N/A    Off  Node  PA cnt        Code address
 77 # .......  .......  .......  .......  .......  .......  .....  ....  ......  ..................
 78 #
 79   -------------------------------------------------------------------------------
 80       0        0        0     4648        0        0    11572            0x422140
 81   -------------------------------------------------------------------------------
 82          0.00%    0.00%    0.00%    0.00%    0.00%   44.47%    0x0   N/A       0            0x400ce8
 83          0.00%    0.00%   10.26%    0.00%    0.00%    0.00%    0x0   N/A       0            0x400e48
 84          0.00%    0.00%    0.00%    0.00%    0.00%   55.53%    0x0   N/A       0            0x400e54
 85          0.00%    0.00%   89.74%    0.00%    0.00%    0.00%    0x8   N/A       0            0x401038

Corrected row 80:
 71 =================================================
 72       Shared Cache Line Distribution Pareto
 73 =================================================
 74 #
 75 #        ----- HITM -----    Snoop  ------- Store Refs -----  ------- CL --------
 76 #        RmtHitm  LclHitm     Peer   L1 Hit  L1 Miss     N/A    Off  Node  PA cnt        Code address
 77 # .......  .......  .......  .......  .......  ......  .....  ....  ......  ..................
 78 #
 79   -------------------------------------------------------------------------------
 80       0        0     4648        0        0    11572            0x422140
 81   -------------------------------------------------------------------------------
 82          0.00%    0.00%    0.00%    0.00%    0.00%   44.47%    0x0   N/A       0            0x400ce8
 83          0.00%    0.00%   10.26%    0.00%    0.00%    0.00%    0x0   N/A       0            0x400e48
 84          0.00%    0.00%    0.00%    0.00%    0.00%   55.53%    0x0   N/A       0            0x400e54
 85          0.00%    0.00%   89.74%    0.00%    0.00%    0.00%    0x8   N/A       0            0x401038

Thanks again for doing this.
Joe
Hi Joe,

On Thu, May 19, 2022 at 11:16:53AM -0400, Joe Mario wrote:

[...]

> Comment 1:
> When I run "perf c2c report", the "Node" field is marked "N/A".  It's
> supposed to show the NUMA node where the data address for the cacheline
> resides.  That's important both to see what node hot data resides on and
> whether that data is getting lots of cross-NUMA accesses.

Good catch.  Will fix it.

> Comment 2:
> I'm assuming you're identifying the contended cachelines using the "peer"
> load response, which indicates the load was resolved from a "peer" cpu's
> cacheline.  Please confirm.

Yeah, "peer" is ambiguous.  AFAIK, a "peer" load can come from:

- The local node, from a peer CPU's cache (either the same cluster or a
  peer cluster);
- A remote node, from a peer CPU's cache line, or even from *remote DRAM*.

> If that's true, is it possible to identify whether that "peer" response
> came from the local or the remote NUMA node?

Good point.  Yes, we can do this.  So far, the remote accesses are
accounted in the metric "rmt_hit", which should be the same as the remote
peer loads; but we have no metric yet to account for local peer loads.
It would not be hard to add a "lcl_peer" metric.

> I ask because being able to identify both local and remote HitM's on
> Intel x86_64 has been quite valuable.  That's because remote HitM's are
> costly and because it helps the viewer see if they need to optimize their
> cpu affinity or what node their hot data resides on.

Thanks a lot for the info.  This means at least I should refine the
shared cache line distribution pareto for remote peer accesses; I will do
some experiments for the enhancement.

> Last Comment:
> There's a row in the Pareto table that has incorrect column alignment.
> Look at row 80 below in the truncated snippet of output.  It has an extra
> field inserted at the beginning.
> I also show what the corrected output should look like.

[...]

Hmm... At my side, I used the below command to output the pareto view,
but I cannot see the column "CL"; the column "CL" is only shown for the
TUI mode but not for the "--stdio" mode.  Could you share the method for
how to reproduce this issue?

$ ./perf c2c report -i perf.data.v3 -N

  =================================================
        Shared Cache Line Distribution Pareto
  =================================================
  #
  #        ----- HITM -----    Snoop  ------- Store Refs ------  --------- Data address ---------                      --------------- cycles ---------------    Total       cpu                                  Shared
  #   Num  RmtHitm  LclHitm     Peer   L1 Hit  L1 Miss      N/A              Offset  Node  PA cnt        Code address  rmt hitm  lcl hitm      load      peer  records       cnt                  Symbol             Object             Source:Line  Node{cpus %peers %stores}
  # .....  .......  .......  .......  .......  .......  .......  ..................  ....  ......  ..................  ........  ........  ........  ........  .......  ........  ......................  .................  ......................  ....
  #
    -------------------------------------------------------------------------------
        0        0        0    56183        0        0    26534            0x420180
    -------------------------------------------------------------------------------
             0.00%    0.00%   99.85%    0.00%    0.00%    0.00%                 0x0   N/A       0            0x400bd0         0         0      1587      4034   188785         2  [.] 0x0000000000000bd0  false_sharing.exe  false_sharing.exe[bd0]  0{ 1  87.4%    n/a}  1{ 1  12.6%    n/a}
             0.00%    0.00%    0.00%    0.00%    0.00%   54.56%                 0x0   N/A       0            0x400bd4         0         0         0         0    14476         2  [.] 0x0000000000000bd4  false_sharing.exe  false_sharing.exe[bd4]  0{ 1    n/a   0.2%}  1{ 1    n/a  99.8%}
             0.00%    0.00%    0.00%    0.00%    0.00%   45.44%                 0x0   N/A       0            0x400bf8         0         0         0         0    12058         2  [.] 0x0000000000000bf8  false_sharing.exe  false_sharing.exe[bf8]  0{ 1    n/a  70.3%}  1{ 1    n/a  29.7%}
             0.00%    0.00%    0.15%    0.00%    0.00%    0.00%                0x20   N/A       0            0x400c64         0         0      2462      2451     4835         2  [.] 0x0000000000000c64  false_sharing.exe  false_sharing.exe[c64]  0{ 1  11.9%    n/a}  1{ 1  88.1%    n/a}
    -------------------------------------------------------------------------------
        1        0        0     2571        0        0    69861            0x420100
    -------------------------------------------------------------------------------
             0.00%    0.00%    0.00%    0.00%    0.00%  100.00%                 0x8   N/A       0            0x400c08         0         0         0         0    69861         2  [.] 0x0000000000000c08  false_sharing.exe  false_sharing.exe[c08]  0{ 1    n/a  62.1%}  1{ 1    n/a  37.9%}
             0.00%    0.00%  100.00%    0.00%    0.00%    0.00%                0x20   N/A       0            0x400c74         0         0       834       641     6576         2  [.] 0x0000000000000c74  false_sharing.exe  false_sharing.exe[c74]  0{ 1  93.2%    n/a}  1{ 1   6.8%    n/a}

Thanks very much for your testing and suggestions!

Leo
On 5/22/22 2:15 AM, Leo Yan wrote:
> Hi Joe,
>
> On Thu, May 19, 2022 at 11:16:53AM -0400, Joe Mario wrote:
>
> [SNIP]

[...]

> Hmm... At my side, I used the below command to output the pareto view,
> but I cannot see the column "CL"; the column "CL" is only shown for the
> TUI mode but not for the "--stdio" mode.  Could you share the method for
> how to reproduce this issue?

Hi Leo:
I figured out why my output was different than yours.

I did not have the slang-devel rpm installed on the host system.

In my original perf build, I missed this output in the build log:
> slang not found, disables TUI support. Please install slang-devel, libslang-dev or libslang2-dev

Once I installed slang-devel, rebuilt perf, and then reran my test, the
pareto output looked fine.

When TUI support is disabled, it shouldn't corrupt the resulting stdio
output.  I don't believe this has anything to do with your commits.

Last, it looks like you should update the help text for the display flag
options to reflect your new peer option.
Currently it says:
    -d, --display <Switch HITM output type>
                          lcl,rmt

But since you added the "peer" display, shouldn't the output for that
help text state:
    -d, --display <Switch HITM output type>
                          lcl,rmt,peer

Joe
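For anyone hitting the same build issue, the fix Joe describes amounts to
something like the following on an RPM-based host (the dnf package manager
and the slang-devel package name are assumptions from Joe's mention of the
"slang-devel rpm"; Debian-based systems would use libslang2-dev instead):

# Install the slang development package, then rebuild perf so TUI support
# is detected at build time
$ sudo dnf install slang-devel
$ cd tools/perf
$ make clean && make VF=1 DEBUG=1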
Hi Joe,

On Mon, May 23, 2022 at 01:24:32PM -0400, Joe Mario wrote:

[...]

> Hi Leo:
> I figured out why my output was different than yours.
>
> I did not have the slang-devel rpm installed on the host system.
>
> In my original perf build, I missed this output in the build log:
> > slang not found, disables TUI support. Please install slang-devel, libslang-dev or libslang2-dev
>
> Once I installed slang-devel, rebuilt perf, and then reran my test, the
> pareto output looked fine.
>
> When TUI support is disabled, it shouldn't corrupt the resulting stdio
> output.  I don't believe this has anything to do with your commits.

Thanks for taking the time to hunt down this issue.

I checked the code and sent out a patch to fix the stdio interface when
the slang library is not installed.  Please see the patch:

  https://lore.kernel.org/lkml/20220526143917.607928-1-leo.yan@linaro.org/T/#u

> Last, it looks like you should update the help text for the display flag
> options to reflect your new peer option.
> Currently it says:
>     -d, --display <Switch HITM output type>
>                           lcl,rmt
>
> But since you added the "peer" display, shouldn't the output for that
> help text state:
>     -d, --display <Switch HITM output type>
>                           lcl,rmt,peer

Yeah, will fix.

I very much appreciate your detailed testing and suggestions.

Leo
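Assuming the new display mode ends up being selectable the same way as the
existing "lcl" and "rmt" values, the help text Joe suggests would correspond
to an invocation along these lines (the "peer" value is taken from the
discussion above, not from released perf documentation, so treat it as a
sketch):

# Focus the c2c report on peer-resolved loads instead of local/remote HITMs
$ ./perf c2c report -d peer -i perf.data --stdio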