
[v2,2/2] perf mem: Support HITM for when mem_lvl_num is used

Message ID 20220221224807.18172-2-alisaidi@amazon.com (mailing list archive)
State New, archived
Series perf: arm-spe: Decode SPE source and use for perf c2c

Commit Message

Ali Saidi Feb. 21, 2022, 10:48 p.m. UTC
Current code only supports HITM statistics for the last level cache (LLC)
when mem_lvl encodes the level. On existing Arm64 machines there are as
many as four levels of cache, and this change supports decoding L1, L2, and
LLC hits from the mem_lvl_num data. Given that the mem_lvl namespace is
being deprecated, take this opportunity to encode the Neoverse data into
mem_lvl_num.

For loads that hit in the LLC snoop filter and are fulfilled from a
higher level cache, it's not usually clear which cache level the data
truly came from (i.e. a transfer from a core could come from its L1 or
L2). Instead of making an assumption about where the line came from, add
support for incrementing HITM if the source is CACHE_ANY.

Since other architectures don't seem to populate the mem_lvl_num field
here, there shouldn't be a change in functionality.

Signed-off-by: Ali Saidi <alisaidi@amazon.com>
---
 tools/perf/util/mem-events.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

Comments

German Gomez March 2, 2022, 3:39 p.m. UTC | #1
On 21/02/2022 22:48, Ali Saidi wrote:
> Current code only support HITM statistics for last level cache (LLC)
> when mem_lvl encodes the level. On existing Arm64 machines there are as
> many as four levels cache and this change supports decoding l1, l2, and
> llc hits from the mem_lvl_num data. Given that the mem_lvl namespace is
> being deprecated take this opportunity to encode the neoverse data into
> mem_lvl_num.

Since Neoverse is mentioned in the commit message, I think there should be a comment somewhere in the code as well.

>
> For loads that hit in a the LLC snoop filter and are fullfilled from a
> higher level cache, it's not usually clear what the true level of the
> cache the data came from (i.e. a transfer from a core could come from
> it's L1 or L2). Instead of making an assumption of where the line came
> from, add support for incrementing HITM if the source is CACHE_ANY.
>
> Since other architectures don't seem to populate the mem_lvl_num field
> here there shouldn't be a change in functionality.
>
> Signed-off-by: Ali Saidi <alisaidi@amazon.com>
> ---
>  tools/perf/util/mem-events.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
> index ed0ab838bcc5..6c3fd4aac7ae 100644
> --- a/tools/perf/util/mem-events.c
> +++ b/tools/perf/util/mem-events.c
> @@ -485,6 +485,7 @@ int c2c_decode_stats(struct c2c_stats *stats, struct mem_info *mi)
>  	u64 daddr  = mi->daddr.addr;
>  	u64 op     = data_src->mem_op;
>  	u64 lvl    = data_src->mem_lvl;
> +	u64 lnum   = data_src->mem_lvl_num;
>  	u64 snoop  = data_src->mem_snoop;
>  	u64 lock   = data_src->mem_lock;
>  	u64 blk    = data_src->mem_blk;
> @@ -527,16 +528,18 @@ do {				\
>  			if (lvl & P(LVL, UNC)) stats->ld_uncache++;
>  			if (lvl & P(LVL, IO))  stats->ld_io++;
>  			if (lvl & P(LVL, LFB)) stats->ld_fbhit++;
> -			if (lvl & P(LVL, L1 )) stats->ld_l1hit++;
> -			if (lvl & P(LVL, L2 )) stats->ld_l2hit++;
> -			if (lvl & P(LVL, L3 )) {
> +			if (lvl & P(LVL, L1) || lnum == P(LVLNUM, L1))
> +				stats->ld_l1hit++;
> +			if (lvl & P(LVL, L2) || lnum == P(LVLNUM, L2))
> +				stats->ld_l2hit++;
> +			if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L4)) {

According to a comment in the previous patch, using L4 is specific to Neoverse, right?

Maybe we need to distinguish the Neoverse case from the generic one here as well

if (is_neoverse)
// treat L4 as llc
else
// treat L3 as llc
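
Just to sketch what that could look like in c2c_decode_stats() (purely
illustrative; "is_neoverse" is a hypothetical flag, perf has no such helper
today):

        /* Illustrative only: "is_neoverse" is a hypothetical platform check,
         * not an existing perf helper. */
        u64 llc_lvlnum = is_neoverse ? P(LVLNUM, L4) : P(LVLNUM, L3);

        if (lvl & P(LVL, L3) || lnum == llc_lvlnum) {
                if (snoop & P(SNOOP, HITM))
                        HITM_INC(lcl_hitm);
                else
                        stats->ld_llchit++;
        }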

>  				if (snoop & P(SNOOP, HITM))
>  					HITM_INC(lcl_hitm);
>  				else
>  					stats->ld_llchit++;
>  			}
>  
> -			if (lvl & P(LVL, LOC_RAM)) {
> +			if (lvl & P(LVL, LOC_RAM) || lnum == P(LVLNUM, RAM)) {
>  				stats->lcl_dram++;
>  				if (snoop & P(SNOOP, HIT))
>  					stats->ld_shared++;
> @@ -564,6 +567,9 @@ do {				\
>  				HITM_INC(rmt_hitm);
>  		}
>  
> +		if (lnum == P(LVLNUM, ANY_CACHE) && snoop & P(SNOOP, HITM))
> +			HITM_INC(lcl_hitm);
> +
>  		if ((lvl & P(LVL, MISS)))
>  			stats->ld_miss++;
>
Leo Yan March 13, 2022, 12:44 p.m. UTC | #2
On Wed, Mar 02, 2022 at 03:39:04PM +0000, German Gomez wrote:
> 
> On 21/02/2022 22:48, Ali Saidi wrote:
> > Current code only support HITM statistics for last level cache (LLC)
> > when mem_lvl encodes the level. On existing Arm64 machines there are as
> > many as four levels cache and this change supports decoding l1, l2, and
> > llc hits from the mem_lvl_num data. Given that the mem_lvl namespace is
> > being deprecated take this opportunity to encode the neoverse data into
> > mem_lvl_num.
> 
> Since Neoverse is mentioned in the commit message, I think there should be a comment somewhere in the code as well.
>

> > For loads that hit in a the LLC snoop filter and are fullfilled from a
> > higher level cache, it's not usually clear what the true level of the
> > cache the data came from (i.e. a transfer from a core could come from
> > it's L1 or L2). Instead of making an assumption of where the line came
> > from, add support for incrementing HITM if the source is CACHE_ANY.
> >
> > Since other architectures don't seem to populate the mem_lvl_num field
> > here there shouldn't be a change in functionality.
> >
> > Signed-off-by: Ali Saidi <alisaidi@amazon.com>
> > ---
> >  tools/perf/util/mem-events.c | 14 ++++++++++----
> >  1 file changed, 10 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
> > index ed0ab838bcc5..6c3fd4aac7ae 100644
> > --- a/tools/perf/util/mem-events.c
> > +++ b/tools/perf/util/mem-events.c
> > @@ -485,6 +485,7 @@ int c2c_decode_stats(struct c2c_stats *stats, struct mem_info *mi)
> >  	u64 daddr  = mi->daddr.addr;
> >  	u64 op     = data_src->mem_op;
> >  	u64 lvl    = data_src->mem_lvl;
> > +	u64 lnum   = data_src->mem_lvl_num;
> >  	u64 snoop  = data_src->mem_snoop;
> >  	u64 lock   = data_src->mem_lock;
> >  	u64 blk    = data_src->mem_blk;
> > @@ -527,16 +528,18 @@ do {				\
> >  			if (lvl & P(LVL, UNC)) stats->ld_uncache++;
> >  			if (lvl & P(LVL, IO))  stats->ld_io++;
> >  			if (lvl & P(LVL, LFB)) stats->ld_fbhit++;
> > -			if (lvl & P(LVL, L1 )) stats->ld_l1hit++;
> > -			if (lvl & P(LVL, L2 )) stats->ld_l2hit++;
> > -			if (lvl & P(LVL, L3 )) {
> > +			if (lvl & P(LVL, L1) || lnum == P(LVLNUM, L1))
> > +				stats->ld_l1hit++;
> > +			if (lvl & P(LVL, L2) || lnum == P(LVLNUM, L2))
> > +				stats->ld_l2hit++;

It would be good to split this into two patches: one patch to add statistics
for the field 'mem_lvl_num', and a second patch to handle the HITM tags.

> > +			if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L4)) {

It's a bit weird that we take either PERF_MEM_LVL_L3 or
PERF_MEM_LVLNUM_L4 as the local last level cache in the same condition
check.

> According to a comment in the previous patch, using L4 is specific to Neoverse, right?
> 
> Maybe we need to distinguish the Neoverse case from the generic one here as well
> 
> if (is_neoverse)
> // treat L4 as llc
> else
> // treat L3 as llc

I personally think it's not a good idea to distinguish platforms in the decoding code.

To make the statistics clearer, we can first increment the hit value for
every cache level respectively; so we could consider adding two extra
statistics items, 'stats->ld_l3hit' and 'stats->ld_l4hit':

        if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L3))
                stats->ld_l3hit++;
        if (lnum == P(LVLNUM, L4))
                stats->ld_l4hit++;
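
The matching counters would presumably also need new fields in
'struct c2c_stats' (and output support in builtin-c2c.c); a minimal sketch,
with the names taken from the snippet above:

        /* Sketch only: new fields for struct c2c_stats in
         * tools/perf/util/mem-events.h, matching the snippet above. */
        u32     ld_l3hit;       /* count of loads which hit in L3 cache */
        u32     ld_l4hit;       /* count of loads which hit in L4 cache */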

> >  				if (snoop & P(SNOOP, HITM))
> >  					HITM_INC(lcl_hitm);
> >  				else
> >  					stats->ld_llchit++;

For the statistics 'ld_llchit' and 'lcl_hitm', please see the comment below.

> >  			}
> >  
> > -			if (lvl & P(LVL, LOC_RAM)) {
> > +			if (lvl & P(LVL, LOC_RAM) || lnum == P(LVLNUM, RAM)) {
> >  				stats->lcl_dram++;
> >  				if (snoop & P(SNOOP, HIT))
> >  					stats->ld_shared++;
> > @@ -564,6 +567,9 @@ do {				\
> >  				HITM_INC(rmt_hitm);
> >  		}
> >  
> > +		if (lnum == P(LVLNUM, ANY_CACHE) && snoop & P(SNOOP, HITM))
> > +			HITM_INC(lcl_hitm);
> > +

The condition check "lnum == P(LVLNUM, ANY_CACHE)" is a bit
suspicious and it might be fragile for supporting multiple archs.

So I am just wondering if it's possible to add a new field
'llc_level' to the structure 'mem_info'; we could initialize this field
based on the different memory hardware events (e.g. Intel mem events,
Arm SPE, etc.).  During the decoding phase, the local last level cache
would be dynamically set in 'mem_info::llc_level', and based on it we
can increment 'ld_llchit' and 'lcl_hitm'; the code would look like below:

                 if ((lvl & P(LVL, REM_CCE1)) ||
                     (lvl & P(LVL, REM_CCE2)) ||
                      mrem) {
                         if (snoop & P(SNOOP, HIT))
                                 stats->rmt_hit++;
                         else if (snoop & P(SNOOP, HITM))
                                 HITM_INC(rmt_hitm);
+               } else {
+                       if ((snoop & P(SNOOP, HIT)) && (lnum == mi->llc_level))
+                               stats->ld_llchit++;
+                       else if (snoop & P(SNOOP, HITM))
+                               HITM_INC(lcl_hitm);
                 }
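
And the hypothetical 'llc_level' field would be initialized wherever
mem_info is built from a hardware event; for the Arm SPE case something
like the line below (the field does not exist today, so this is only a
sketch):

        /* Hypothetical: record which LVLNUM value the arch treats as its
         * local last level cache when synthesizing the sample. */
        mi->llc_level = PERF_MEM_LVLNUM_L3;   /* or L4 on a 4-level system */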

Thanks,
Leo

> >  		if ((lvl & P(LVL, MISS)))
> >  			stats->ld_miss++;
> >
Ali Saidi March 13, 2022, 7:19 p.m. UTC | #3
Hi Leo,

On Sun, 13 Mar 2022 12:46:02 +0000, Leo Yan wrote:
> On Wed, Mar 02, 2022 at 03:39:04PM +0000, German Gomez wrote:
> > 
> > On 21/02/2022 22:48, Ali Saidi wrote:
> > > Current code only support HITM statistics for last level cache (LLC)
> > > when mem_lvl encodes the level. On existing Arm64 machines there are as
> > > many as four levels cache and this change supports decoding l1, l2, and
> > > llc hits from the mem_lvl_num data. Given that the mem_lvl namespace is
> > > being deprecated take this opportunity to encode the neoverse data into
> > > mem_lvl_num.
> > 
> > Since Neoverse is mentioned in the commit message, I think there should be a comment somewhere in the code as well.
> >
> 
> > > For loads that hit in a the LLC snoop filter and are fullfilled from a
> > > higher level cache, it's not usually clear what the true level of the
> > > cache the data came from (i.e. a transfer from a core could come from
> > > it's L1 or L2). Instead of making an assumption of where the line came
> > > from, add support for incrementing HITM if the source is CACHE_ANY.
> > >
> > > Since other architectures don't seem to populate the mem_lvl_num field
> > > here there shouldn't be a change in functionality.
> > >
> > > Signed-off-by: Ali Saidi <alisaidi@amazon.com>
> > > ---
> > >  tools/perf/util/mem-events.c | 14 ++++++++++----
> > >  1 file changed, 10 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
> > > index ed0ab838bcc5..6c3fd4aac7ae 100644
> > > --- a/tools/perf/util/mem-events.c
> > > +++ b/tools/perf/util/mem-events.c
> > > @@ -485,6 +485,7 @@ int c2c_decode_stats(struct c2c_stats *stats, struct mem_info *mi)
> > >  	u64 daddr  = mi->daddr.addr;
> > >  	u64 op     = data_src->mem_op;
> > >  	u64 lvl    = data_src->mem_lvl;
> > > +	u64 lnum   = data_src->mem_lvl_num;
> > >  	u64 snoop  = data_src->mem_snoop;
> > >  	u64 lock   = data_src->mem_lock;
> > >  	u64 blk    = data_src->mem_blk;
> > > @@ -527,16 +528,18 @@ do {				\
> > >  			if (lvl & P(LVL, UNC)) stats->ld_uncache++;
> > >  			if (lvl & P(LVL, IO))  stats->ld_io++;
> > >  			if (lvl & P(LVL, LFB)) stats->ld_fbhit++;
> > > -			if (lvl & P(LVL, L1 )) stats->ld_l1hit++;
> > > -			if (lvl & P(LVL, L2 )) stats->ld_l2hit++;
> > > -			if (lvl & P(LVL, L3 )) {
> > > +			if (lvl & P(LVL, L1) || lnum == P(LVLNUM, L1))
> > > +				stats->ld_l1hit++;
> > > +			if (lvl & P(LVL, L2) || lnum == P(LVLNUM, L2))
> > > +				stats->ld_l2hit++;
> 
> It's good to split into two patches: one patch is to add statistics for
> field 'mem_lvl_num', the second patch is to handle HITM tags.
> 
> > > +			if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L4)) {
> 
> It's a bit weird that we take either PERF_MEM_LVL_L3 or
> PERF_MEM_LVLNUM_L4 as the last level local cache in the same condition
> checking.
> 
> > According to a comment in the previous patch, using L4 is specific to Neoverse, right?
> > 
> > Maybe we need to distinguish the Neoverse case from the generic one here as well
> > 
> > if (is_neoverse)
> > // treat L4 as llc
> > else
> > // treat L3 as llc
> 
> I personally think it's not good idea to distinguish platforms in the decoding code.

I agree here. The more we talk about this, the more I'm wondering if we're
spending too much code solving a problem that doesn't exist. I know of no
Neoverse systems that actually have four cache levels; they all have three,
even though it's technically possible to have four.  I have some doubts that
anyone will actually build four levels of cache, so perhaps the most prudent
path here is to assume only three levels (and adjust the previous patch) until
someone actually produces a system with four levels, instead of carrying a lot
of code that is never actually exercised?

Thanks,
Ali
Leo Yan March 14, 2022, 6:33 a.m. UTC | #4
On Sun, Mar 13, 2022 at 07:19:33PM +0000, Ali Saidi wrote:

[...]

> > > > +			if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L4)) {
> > >
> > > According to a comment in the previous patch, using L4 is specific to Neoverse, right?
> > > 
> > > Maybe we need to distinguish the Neoverse case from the generic one here as well
> > > 
> > > if (is_neoverse)
> > > // treat L4 as llc
> > > else
> > > // treat L3 as llc
> > 
> > I personally think it's not good idea to distinguish platforms in the decoding code.
> 
> I agree here. The more we talk about this, the more I'm wondering if we're
> spending too much code solving a problem that doesn't exist. I know of no
> Neoverse systems that actually have 4 cache levels, they all actually have three
> even though it's technically possible to have four.  I have some doubts anyone
> will actually build four levels of cache and perhaps the most prudent path here
> is to assume only three levels (and adjust the previous patch) until someone 
> actually produces a system with four levels instead of a lot of code that is
> never actually exercised?

I am not the right person to say whether an L4 cache is implemented in
Neoverse platforms; my guess is that a "System cache" data source might be
L3 or L4 and it is implementation dependent.  Maybe German or colleagues
at Arm could confirm this.
could confirm for this.

Thanks,
Leo
German Gomez March 14, 2022, 6 p.m. UTC | #5
Hi Leo, Ali

On 14/03/2022 06:33, Leo Yan wrote:
> On Sun, Mar 13, 2022 at 07:19:33PM +0000, Ali Saidi wrote:
>
> [...]
>
>>>>> +			if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L4)) {
>>>> According to a comment in the previous patch, using L4 is specific to Neoverse, right?
>>>>
>>>> Maybe we need to distinguish the Neoverse case from the generic one here as well
>>>>
>>>> if (is_neoverse)
>>>> // treat L4 as llc
>>>> else
>>>> // treat L3 as llc
>>> I personally think it's not good idea to distinguish platforms in the decoding code.
>> I agree here. The more we talk about this, the more I'm wondering if we're
>> spending too much code solving a problem that doesn't exist. I know of no
>> Neoverse systems that actually have 4 cache levels, they all actually have three
>> even though it's technically possible to have four.  I have some doubts anyone
>> will actually build four levels of cache and perhaps the most prudent path here
>> is to assume only three levels (and adjust the previous patch) until someone 
>> actually produces a system with four levels instead of a lot of code that is
>> never actually exercised?
> I am not right person to say L4 cache is not implemented in Neoverse
> platforms; my guess for a "System cache" data source might be L3 or
> L4 and it is a implementation dependent.  Maybe German or Arm mates
> could confirm for this.

I had a look at the TRMs for the N1[1], V1[2] and N2[3] Neoverse cores
(specifically the LL_CACHE_RD pmu events). If we were to assign a number
to the system cache (assuming all caches are implemented):

*For N1*, if L2 and L3 are implemented, system cache would follow at *L4*

*For V1 and N2*, if L2 is implemented, system cache would follow at *L3*
(these don't seem to have the same/similar per-cluster L3 cache as the N1)

There's also room in the PERF_MEM_LVLNUM_* namespace for a SYSTEM value,
if we want to consider that option as well.

[1] https://developer.arm.com/documentation/100616/0401/?lang=en
[2] https://developer.arm.com/documentation/101427/0101/?lang=en
[3] https://developer.arm.com/documentation/102099/0001/?lang=en

>
> Thanks,
> Leo
Ali Saidi March 14, 2022, 6:37 p.m. UTC | #6
Hi German and Leo,

On Mon, 14 Mar 2022 18:00:13 +0000, German Gomez wrote:
> Hi Leo, Ali
> 
> On 14/03/2022 06:33, Leo Yan wrote:
> > On Sun, Mar 13, 2022 at 07:19:33PM +0000, Ali Saidi wrote:
> >
> > [...]
> >
> >>>>> +			if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L4)) {
> >>>> According to a comment in the previous patch, using L4 is specific to Neoverse, right?
> >>>>
> >>>> Maybe we need to distinguish the Neoverse case from the generic one here as well
> >>>>
> >>>> if (is_neoverse)
> >>>> // treat L4 as llc
> >>>> else
> >>>> // treat L3 as llc
> >>> I personally think it's not good idea to distinguish platforms in the decoding code.
> >> I agree here. The more we talk about this, the more I'm wondering if we're
> >> spending too much code solving a problem that doesn't exist. I know of no
> >> Neoverse systems that actually have 4 cache levels, they all actually have three
> >> even though it's technically possible to have four.  I have some doubts anyone
> >> will actually build four levels of cache and perhaps the most prudent path here
> >> is to assume only three levels (and adjust the previous patch) until someone 
> >> actually produces a system with four levels instead of a lot of code that is
> >> never actually exercised?
> > I am not right person to say L4 cache is not implemented in Neoverse
> > platforms; my guess for a "System cache" data source might be L3 or
> > L4 and it is a implementation dependent.  Maybe German or Arm mates
> > could confirm for this.
> 
> I had a look at the TRMs for the N1[1], V1[2] and N2[3] Neoverse cores
> (specifically the LL_CACHE_RD pmu events). If we were to assign a number
> to the system cache (assuming all caches are implemented):
> 
> *For N1*, if L2 and L3 are implemented, system cache would follow at *L4*
To date no one has built 4 levels though. Everyone has only built three.

> *For V1 and N2*, if L2 is implemented, system cache would follow at *L3*
> (these don't seem to have the same/similar per-cluster L3 cache from the N1)

And in the future they're not able to build >3. German and Leo, if there aren't
strong objections, I think the best path forward is for me to respin these
assuming only 3 levels, and if someone builds 4 in a far-off future we can
always change the implementation then. Agreed?

Thanks,
Ali
German Gomez March 15, 2022, 6:44 p.m. UTC | #7
On 14/03/2022 18:37, Ali Saidi wrote:
> Hi German and Leo,
>
> On   Mon, 14 Mar 2022 18:00:13 +0000, German Gomez wrote:
>> Hi Leo, Ali
>>
>> On 14/03/2022 06:33, Leo Yan wrote:
>>> On Sun, Mar 13, 2022 at 07:19:33PM +0000, Ali Saidi wrote:
>>>
>>> [...]
>>>
>>>>>>> +			if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L4)) {
>>>>>> According to a comment in the previous patch, using L4 is specific to Neoverse, right?
>>>>>>
>>>>>> Maybe we need to distinguish the Neoverse case from the generic one here as well
>>>>>>
>>>>>> if (is_neoverse)
>>>>>> // treat L4 as llc
>>>>>> else
>>>>>> // treat L3 as llc
>>>>> I personally think it's not good idea to distinguish platforms in the decoding code.
>>>> I agree here. The more we talk about this, the more I'm wondering if we're
>>>> spending too much code solving a problem that doesn't exist. I know of no
>>>> Neoverse systems that actually have 4 cache levels, they all actually have three
>>>> even though it's technically possible to have four.  I have some doubts anyone
>>>> will actually build four levels of cache and perhaps the most prudent path here
>>>> is to assume only three levels (and adjust the previous patch) until someone 
>>>> actually produces a system with four levels instead of a lot of code that is
>>>> never actually exercised?
>>> I am not right person to say L4 cache is not implemented in Neoverse
>>> platforms; my guess for a "System cache" data source might be L3 or
>>> L4 and it is a implementation dependent.  Maybe German or Arm mates
>>> could confirm for this.
>> I had a look at the TRMs for the N1[1], V1[2] and N2[3] Neoverse cores
>> (specifically the LL_CACHE_RD pmu events). If we were to assign a number
>> to the system cache (assuming all caches are implemented):
>>
>> *For N1*, if L2 and L3 are implemented, system cache would follow at *L4*
> To date no one has built 4 level though. Everyone has only built three.

The N1SDP board advertises 4 levels (we use it regularly for testing perf patches)

| $ cat /sys/devices/system/cpu/cpu0/cache/index4/{level,shared_cpu_list}
| 4
| 0-3

Would it be a good idea to obtain the system cache level# from sysfs?
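
If we went that way, a rough sketch of reading the deepest advertised level
from cacheinfo could look like the below (standalone and illustrative, not
part of this patch; the function name is made up):

        #include <limits.h>
        #include <stdio.h>

        /* Illustrative sketch: walk the per-CPU cacheinfo "level" files and
         * return the deepest cache level advertised by sysfs. */
        static int max_cache_level(int cpu)
        {
                int idx, level, max_level = 0;
                char path[PATH_MAX];
                FILE *f;

                for (idx = 0; ; idx++) {
                        snprintf(path, sizeof(path),
                                 "/sys/devices/system/cpu/cpu%d/cache/index%d/level",
                                 cpu, idx);
                        f = fopen(path, "r");
                        if (!f)
                                break;
                        if (fscanf(f, "%d", &level) == 1 && level > max_level)
                                max_level = level;
                        fclose(f);
                }
                return max_level;
        }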

>> *For V1 and N2*, if L2 is implemented, system cache would follow at *L3*
>> (these don't seem to have the same/similar per-cluster L3 cache from the N1)
> And in the future they're not able to build >3. German and Leo if there aren't
> strong objections I think the best path forward is for me to respin these
> assuming only 3 levels and if someone builds 4 in a far-off-future we can always
> change the implementation then. Agreed?
>
> Thanks,
> Ali
>
German Gomez March 16, 2022, 11:43 a.m. UTC | #8
On 15/03/2022 18:44, German Gomez wrote:
> On 14/03/2022 18:37, Ali Saidi wrote:
>> Hi German and Leo,
>>
>> On   Mon, 14 Mar 2022 18:00:13 +0000, German Gomez wrote:
>>> Hi Leo, Ali
>>>
>>> On 14/03/2022 06:33, Leo Yan wrote:
>>>> On Sun, Mar 13, 2022 at 07:19:33PM +0000, Ali Saidi wrote:
>>>>
>>>> [...]
>>>>
>>>>>>>> +			if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L4)) {
>>>>>>> According to a comment in the previous patch, using L4 is specific to Neoverse, right?
>>>>>>>
>>>>>>> Maybe we need to distinguish the Neoverse case from the generic one here as well
>>>>>>>
>>>>>>> if (is_neoverse)
>>>>>>> // treat L4 as llc
>>>>>>> else
>>>>>>> // treat L3 as llc
>>>>>> I personally think it's not good idea to distinguish platforms in the decoding code.
>>>>> I agree here. The more we talk about this, the more I'm wondering if we're
>>>>> spending too much code solving a problem that doesn't exist. I know of no
>>>>> Neoverse systems that actually have 4 cache levels, they all actually have three
>>>>> even though it's technically possible to have four.  I have some doubts anyone
>>>>> will actually build four levels of cache and perhaps the most prudent path here
>>>>> is to assume only three levels (and adjust the previous patch) until someone 
>>>>> actually produces a system with four levels instead of a lot of code that is
>>>>> never actually exercised?
>>>> I am not right person to say L4 cache is not implemented in Neoverse
>>>> platforms; my guess for a "System cache" data source might be L3 or
>>>> L4 and it is a implementation dependent.  Maybe German or Arm mates
>>>> could confirm for this.
>>> I had a look at the TRMs for the N1[1], V1[2] and N2[3] Neoverse cores
>>> (specifically the LL_CACHE_RD pmu events). If we were to assign a number
>>> to the system cache (assuming all caches are implemented):
>>>
>>> *For N1*, if L2 and L3 are implemented, system cache would follow at *L4*
>> To date no one has built 4 level though. Everyone has only built three.
> The N1SDP board advertises 4 levels (we use it regularly for testing perf patches)

That said, it's probably the odd one out.

I'm not against assuming 3 levels. Later, if there is a strong need for L4, indeed we can go back and change it.

Thanks,
German

>
> | $ cat /sys/devices/system/cpu/cpu0/cache/index4/{level,shared_cpu_list}
> | 4
> | 0-3
>
> Would it be a good idea to obtain the system cache level# from sysfs?
>
>>> *For V1 and N2*, if L2 is implemented, system cache would follow at *L3*
>>> (these don't seem to have the same/similar per-cluster L3 cache from the N1)
>> And in the future they're not able to build >3. German and Leo if there aren't
>> strong objections I think the best path forward is for me to respin these
>> assuming only 3 levels and if someone builds 4 in a far-off-future we can always
>> change the implementation then. Agreed?
>>
>> Thanks,
>> Ali
>>
Leo Yan March 16, 2022, 12:42 p.m. UTC | #9
On Wed, Mar 16, 2022 at 11:43:52AM +0000, German Gomez wrote:

[...]

> >>> I had a look at the TRMs for the N1[1], V1[2] and N2[3] Neoverse cores
> >>> (specifically the LL_CACHE_RD pmu events). If we were to assign a number
> >>> to the system cache (assuming all caches are implemented):
> >>>
> >>> *For N1*, if L2 and L3 are implemented, system cache would follow at *L4*
> >> To date no one has built 4 level though. Everyone has only built three.
> > The N1SDP board advertises 4 levels (we use it regularly for testing perf patches)
> 
> That said, it's probably the odd one out.
> 
> I'm not against assuming 3 levels. Later if there's is a strong need for L4, indeed we can go back and change it.

Thanks for the info.

Exploring the cache hierarchy via sysfs is a good idea; the only concern
for me is: can we simply take the system cache as the same thing as the
highest level cache?  If so, I think another option is to define a cache
level such as "PERF_MEM_LVLNUM_SYSTEM_CACHE" and extend the decoding code
to support it.

With PERF_MEM_LVLNUM_SYSTEM_CACHE, it can clearly tell users that the data
source is the system cache, and users can easily map this info to the cache
hardware on their working platform.

In practice, I don't object to using cache level 3 as a first step.  At
least this can meet the requirement at the current stage.
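
Just to make that alternative concrete, the decode side could then key off
the new value, roughly as below (PERF_MEM_LVLNUM_SYSTEM_CACHE is not a real
uapi value today, so this is only a sketch):

        /* Hypothetical: SYSTEM_CACHE is not defined in the perf uapi today;
         * if it were, c2c decode could treat it as the local LLC. */
        if (lvl & P(LVL, L3) || lnum == P(LVLNUM, SYSTEM_CACHE)) {
                if (snoop & P(SNOOP, HITM))
                        HITM_INC(lcl_hitm);
                else
                        stats->ld_llchit++;
        }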

Thanks,
Leo
German Gomez March 16, 2022, 3:10 p.m. UTC | #10
On 16/03/2022 12:42, Leo Yan wrote:
> On Wed, Mar 16, 2022 at 11:43:52AM +0000, German Gomez wrote:
>
> [...]
>
>>>>> I had a look at the TRMs for the N1[1], V1[2] and N2[3] Neoverse cores
>>>>> (specifically the LL_CACHE_RD pmu events). If we were to assign a number
>>>>> to the system cache (assuming all caches are implemented):
>>>>>
>>>>> *For N1*, if L2 and L3 are implemented, system cache would follow at *L4*
>>>> To date no one has built 4 level though. Everyone has only built three.
>>> The N1SDP board advertises 4 levels (we use it regularly for testing perf patches)
>> That said, it's probably the odd one out.
>>
>> I'm not against assuming 3 levels. Later if there's is a strong need for L4, indeed we can go back and change it.
> Thanks for the info.
>
> For exploring cache hierarchy via sysFS is a good idea, the only one
> concern for me is: can we simply take the system cache as the same
> thing as the highest level cache?  If so, I think another option is to

For Neoverse, it should be. The LL_CACHE_RD PMU event description says (if the system cache is implemented):

* If CPUECTLR.EXTLLC is set: This event counts any cacheable read transaction which returns a data source of 'interconnect cache'.

> define a cache level as "PERF_MEM_LVLNUM_SYSTEM_CACHE" and extend the
> decoding code for support it.
>
> With PERF_MEM_LVLNUM_SYSTEM_CACHE, it can tell users clearly the data
> source from system cache, and users can easily map this info with the
> cache media on the working platform.
>
> In practice, I don't object to use cache level 3 at first step.  At
> least this can meet the requirement at current stage.

Ok, I agree. I think for now it is a good compromise.
Detecting the caches seems like an additional/separate perf feature.

Thanks,
German

> Thanks,
> Leo

Patch

diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
index ed0ab838bcc5..6c3fd4aac7ae 100644
--- a/tools/perf/util/mem-events.c
+++ b/tools/perf/util/mem-events.c
@@ -485,6 +485,7 @@  int c2c_decode_stats(struct c2c_stats *stats, struct mem_info *mi)
 	u64 daddr  = mi->daddr.addr;
 	u64 op     = data_src->mem_op;
 	u64 lvl    = data_src->mem_lvl;
+	u64 lnum   = data_src->mem_lvl_num;
 	u64 snoop  = data_src->mem_snoop;
 	u64 lock   = data_src->mem_lock;
 	u64 blk    = data_src->mem_blk;
@@ -527,16 +528,18 @@  do {				\
 			if (lvl & P(LVL, UNC)) stats->ld_uncache++;
 			if (lvl & P(LVL, IO))  stats->ld_io++;
 			if (lvl & P(LVL, LFB)) stats->ld_fbhit++;
-			if (lvl & P(LVL, L1 )) stats->ld_l1hit++;
-			if (lvl & P(LVL, L2 )) stats->ld_l2hit++;
-			if (lvl & P(LVL, L3 )) {
+			if (lvl & P(LVL, L1) || lnum == P(LVLNUM, L1))
+				stats->ld_l1hit++;
+			if (lvl & P(LVL, L2) || lnum == P(LVLNUM, L2))
+				stats->ld_l2hit++;
+			if (lvl & P(LVL, L3) || lnum == P(LVLNUM, L4)) {
 				if (snoop & P(SNOOP, HITM))
 					HITM_INC(lcl_hitm);
 				else
 					stats->ld_llchit++;
 			}
 
-			if (lvl & P(LVL, LOC_RAM)) {
+			if (lvl & P(LVL, LOC_RAM) || lnum == P(LVLNUM, RAM)) {
 				stats->lcl_dram++;
 				if (snoop & P(SNOOP, HIT))
 					stats->ld_shared++;
@@ -564,6 +567,9 @@  do {				\
 				HITM_INC(rmt_hitm);
 		}
 
+		if (lnum == P(LVLNUM, ANY_CACHE) && snoop & P(SNOOP, HITM))
+			HITM_INC(lcl_hitm);
+
 		if ((lvl & P(LVL, MISS)))
 			stats->ld_miss++;