Message ID | 1512490399-94107-3-git-send-email-john.garry@huawei.com (mailing list archive)
---|---
State | New, archived
On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote:
> For some architectures (like arm64), there are architecture-
> defined recommended events. Vendors may not be obliged to
> follow the recommendation and may implement their own pmu
> event for a specific event code.

I would just duplicate the architected events into the different
vendor files. Then you wouldn't need all this mess.

-Andi
On 05/12/2017 17:27, Andi Kleen wrote:
> On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote:
>> For some architectures (like arm64), there are architecture-
>> defined recommended events. Vendors may not be obliged to
>> follow the recommendation and may implement their own pmu
>> event for a specific event code.
>
> I would just duplicate the architected events into the different
> vendor files. Then you wouldn't need all this mess.

This is what we were originally doing:
https://patchwork.kernel.org/patch/10010859/

But then we thought that we could avoid duplicating all these events
for every platform from every vendor. Most, if not all, vendors will
implement the events as recommended for any platform, so much
unnecessary duplication.

cheers,
John

> -Andi
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote:
> For some architectures (like arm64), there are architecture-
> defined recommended events. Vendors may not be obliged to
> follow the recommendation and may implement their own pmu
> event for a specific event code.
>
> This patch adds support for parsing events from arch-defined
> recommended JSONs, and then fixing up vendor events when
> they have implemented these events as recommended.

in the previous patch you added the vendor support, so
you have arch|vendor|platform key for the event list
and perf have the most current/local event list

why would you need to fix it? if there's new event list,
the table gets updated, perf is rebuilt.. I'm clearly
missing something ;-)

> In the vendor JSON, to specify that the event is supported
> according to the recommendation, only the event code is
> added to the JSON entry - no other event elements need be
> added, like below:
>
> [
>     {
>         "EventCode": "0x40",
>     },
> ]
>
> The pmu event parsing will check for "BriefDescription"
> field presence only for this.
>
> If "BriefDescription" is present, then it is implied
> that the vendor has implemented their own custom event,
> and there is no fixup. Other fields are ignored.

if we are going this way, please use some new token,
this list is supposed to be human readable

thanks,
jirka
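[The fixup scheme the patch describes can be sketched as follows — an illustrative Python model (jevents itself is C), where event lists are lists of dicts and the 0x40 event name/description are hypothetical arm64 values:]

```python
def fixup_vendor_events(arch_events, vendor_events):
    # Index the arch-defined recommended events by event code.
    by_code = {e["EventCode"]: e for e in arch_events}
    fixed = []
    for ev in vendor_events:
        if "BriefDescription" in ev:
            # Vendor implemented their own custom event: no fixup.
            fixed.append(ev)
        else:
            # Stub entry (event code only): take the recommended
            # event's fields wholesale.
            fixed.append(dict(by_code[ev["EventCode"]]))
    return fixed

arch = [{"EventCode": "0x40", "EventName": "L1D_CACHE_RD",
         "BriefDescription": "L1 data cache access, read"}]
vendor = [{"EventCode": "0x40"}]  # stub: implemented as recommended
print(fixup_vendor_events(arch, vendor)[0]["EventName"])  # L1D_CACHE_RD
```

[This is only a model of the behaviour described in the commit message; the real tool builds C tables rather than returning merged dicts.]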
On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote:

SNIP

> ---
>  tools/perf/pmu-events/jevents.c | 215 ++++++++++++++++++++++++++++++++++++----
>  1 file changed, 198 insertions(+), 17 deletions(-)
>
> diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
> index a0d489e..a820ed4 100644
> --- a/tools/perf/pmu-events/jevents.c
> +++ b/tools/perf/pmu-events/jevents.c
> @@ -42,6 +42,7 @@
>  #include <dirent.h>
>  #include <sys/time.h>		/* getrlimit */
>  #include <sys/resource.h>	/* getrlimit */
> +#include <sys/queue.h>
>  #include <ftw.h>
>  #include <sys/stat.h>
>  #include "jsmn.h"
> @@ -366,6 +367,94 @@ static int print_events_table_entry(void *data, char *name, char *event,
>  	return 0;
>  }
>
> +struct event_struct {
> +	char *name;
> +	char *event;
> +	char *desc;
> +	char *long_desc;
> +	char *pmu;
> +	char *unit;
> +	char *perpkg;
> +	char *metric_expr;
> +	char *metric_name;
> +	char *metric_group;
> +	LIST_ENTRY(event_struct) list;

is there any reason you don't use the 'struct list_head' ?
I don't think we want another thingie involved for lists

jirka
On 06/12/2017 13:37, Jiri Olsa wrote: > On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote: > > SNIP > >> --- >> tools/perf/pmu-events/jevents.c | 215 ++++++++++++++++++++++++++++++++++++---- >> 1 file changed, 198 insertions(+), 17 deletions(-) >> >> diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c >> index a0d489e..a820ed4 100644 >> --- a/tools/perf/pmu-events/jevents.c >> +++ b/tools/perf/pmu-events/jevents.c >> @@ -42,6 +42,7 @@ >> #include <dirent.h> >> #include <sys/time.h> /* getrlimit */ >> #include <sys/resource.h> /* getrlimit */ >> +#include <sys/queue.h> >> #include <ftw.h> >> #include <sys/stat.h> >> #include "jsmn.h" >> @@ -366,6 +367,94 @@ static int print_events_table_entry(void *data, char *name, char *event, >> return 0; >> } >> >> +struct event_struct { >> + char *name; >> + char *event; >> + char *desc; >> + char *long_desc; >> + char *pmu; >> + char *unit; >> + char *perpkg; >> + char *metric_expr; >> + char *metric_name; >> + char *metric_group; >> + LIST_ENTRY(event_struct) list; > > is there any reason you don't use the 'struct list_head' ? > I dont think we want another thingie involved for lists Hi jirka, The linux kernel headers are not used for jevents tool. I would rather use them if possible... Thanks, John > > jirka > > . >
On 06/12/2017 13:36, Jiri Olsa wrote: > On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote: >> For some architectures (like arm64), there are architecture- >> defined recommended events. Vendors may not be obliged to >> follow the recommendation and may implement their own pmu >> event for a specific event code. >> >> This patch adds support for parsing events from arch-defined >> recommended JSONs, and then fixing up vendor events when >> they have implemented these events as recommended. > > in the previous patch you added the vendor support, so > you have arch|vendor|platform key for the event list > and perf have the most current/local event list > > why would you need to fix it? if there's new event list, > the table gets updated, perf is rebuilt.. I'm clearly > missing something ;-) The 2 patches are quite separate. In the first patch, I just added support for the vendor subdirectory. So this patch is not related to rebuilding when adding a new event list or dependency checking. Here we are trying to allow the vendor to just specify that an event is supported as standard in their platform, without duplicating all the standard event fields in their JSON. When processing the vendor JSONs, the jevents tool can figure which events are standard and create the proper event entries in the pmu events table, referencing the architecture JSON. > >> In the vendor JSON, to specify that the event is supported >> according to the recommendation, only the event code is >> added to the JSON entry - no other event elements need be >> added, like below: >> [ >> { >> "EventCode": "0x40", >> }, >> >> ] >> >> The pmu event parsing will check for "BriefDescription" >> field presence only for this. >> >> If "BriefDescription" is present, then it is implied >> that the vendor has implemented their own custom event, >> and there is no fixup. Other fields are ignored. 
> > if we are going this way, please use some new token, > this list is supposed to be human readable A new token could work also, but it would be just a flag to mark the event "standard". Ideally we could reference another entry in another JSON, like a pointer, but I don't think that this is possible with JSONs; not unless we introduce some elaborate custom scheme to allow JSONs to be cross-referenced. Cheers, John > > thanks, > jirka > > . >
On Wed, Dec 06, 2017 at 03:20:14PM +0000, John Garry wrote: > On 06/12/2017 13:36, Jiri Olsa wrote: > > On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote: > > > For some architectures (like arm64), there are architecture- > > > defined recommended events. Vendors may not be obliged to > > > follow the recommendation and may implement their own pmu > > > event for a specific event code. > > > > > > This patch adds support for parsing events from arch-defined > > > recommended JSONs, and then fixing up vendor events when > > > they have implemented these events as recommended. > > > > in the previous patch you added the vendor support, so > > you have arch|vendor|platform key for the event list > > and perf have the most current/local event list > > > > why would you need to fix it? if there's new event list, > > the table gets updated, perf is rebuilt.. I'm clearly > > missing something ;-) > > The 2 patches are quite separate. In the first patch, I just added support > for the vendor subdirectory. > > So this patch is not related to rebuilding when adding a new event list or > dependency checking. > > Here we are trying to allow the vendor to just specify that an event is > supported as standard in their platform, without duplicating all the > standard event fields in their JSON. When processing the vendor JSONs, the > jevents tool can figure which events are standard and create the proper > event entries in the pmu events table, referencing the architecture JSON. I think we should keep this simple and mangle this with some pointer logic now you have arch/vendor/platform directory structure.. why don't you add events for every such directory? 
I understand there will be duplications, but we already have them
for other archs and it's not big deal:

[jolsa@krava perf]$ grep -r L2_RQSTS.DEMAND_DATA_RD_MISS pmu-events/arch/*
pmu-events/arch/x86/broadwell/cache.json:    "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
pmu-events/arch/x86/haswell/cache.json:    "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
pmu-events/arch/x86/broadwellde/cache.json:    "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
pmu-events/arch/x86/haswellx/cache.json:    "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
pmu-events/arch/x86/skylake/cache.json:    "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
pmu-events/arch/x86/skylakex/cache.json:    "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
pmu-events/arch/x86/broadwellx/cache.json:    "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",

thanks,
jirka
On Wed, Dec 06, 2017 at 02:40:10PM +0000, John Garry wrote: > On 06/12/2017 13:37, Jiri Olsa wrote: > > On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote: > > > > SNIP > > > > > --- > > > tools/perf/pmu-events/jevents.c | 215 ++++++++++++++++++++++++++++++++++++---- > > > 1 file changed, 198 insertions(+), 17 deletions(-) > > > > > > diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c > > > index a0d489e..a820ed4 100644 > > > --- a/tools/perf/pmu-events/jevents.c > > > +++ b/tools/perf/pmu-events/jevents.c > > > @@ -42,6 +42,7 @@ > > > #include <dirent.h> > > > #include <sys/time.h> /* getrlimit */ > > > #include <sys/resource.h> /* getrlimit */ > > > +#include <sys/queue.h> > > > #include <ftw.h> > > > #include <sys/stat.h> > > > #include "jsmn.h" > > > @@ -366,6 +367,94 @@ static int print_events_table_entry(void *data, char *name, char *event, > > > return 0; > > > } > > > > > > +struct event_struct { > > > + char *name; > > > + char *event; > > > + char *desc; > > > + char *long_desc; > > > + char *pmu; > > > + char *unit; > > > + char *perpkg; > > > + char *metric_expr; > > > + char *metric_name; > > > + char *metric_group; > > > + LIST_ENTRY(event_struct) list; > > > > is there any reason you don't use the 'struct list_head' ? > > I dont think we want another thingie involved for lists > > Hi jirka, > > The linux kernel headers are not used for jevents tool. I would rather use > them if possible... should be as easy as adding #include <linux/list.h> ;-) it's heavily used within perf and I'm pretty sure we want to keep around just one way of doing lists thanks, jirka
On 08/12/2017 12:31, Jiri Olsa wrote:
> On Wed, Dec 06, 2017 at 02:40:10PM +0000, John Garry wrote:
>> On 06/12/2017 13:37, Jiri Olsa wrote:
>>> On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote:
>>>
>>> SNIP
>>>
>>>> ---
>>>>  tools/perf/pmu-events/jevents.c | 215 ++++++++++++++++++++++++++++++++++++----
>>>>  1 file changed, 198 insertions(+), 17 deletions(-)
>>>>
>>>> diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
>>>> index a0d489e..a820ed4 100644
>>>> --- a/tools/perf/pmu-events/jevents.c
>>>> +++ b/tools/perf/pmu-events/jevents.c
>>>> @@ -42,6 +42,7 @@
>>>>  #include <dirent.h>
>>>>  #include <sys/time.h>		/* getrlimit */
>>>>  #include <sys/resource.h>	/* getrlimit */
>>>> +#include <sys/queue.h>
>>>>  #include <ftw.h>
>>>>  #include <sys/stat.h>
>>>>  #include "jsmn.h"
>>>> @@ -366,6 +367,94 @@ static int print_events_table_entry(void *data, char *name, char *event,
>>>>  	return 0;
>>>>  }
>>>>
>>>> +struct event_struct {
>>>> +	char *name;
>>>> +	char *event;
>>>> +	char *desc;
>>>> +	char *long_desc;
>>>> +	char *pmu;
>>>> +	char *unit;
>>>> +	char *perpkg;
>>>> +	char *metric_expr;
>>>> +	char *metric_name;
>>>> +	char *metric_group;
>>>> +	LIST_ENTRY(event_struct) list;
>>>
>>> is there any reason you don't use the 'struct list_head' ?
>>> I don't think we want another thingie involved for lists
>>
>> Hi jirka,
>>
>> The linux kernel headers are not used for jevents tool. I would rather
>> use them if possible...
>
> should be as easy as adding #include <linux/list.h> ;-)

Hi jirka,

I think the issue is that jevents is a "hostprogs", which does not use
kernel headers. FWIW, here is the symptom:

pmu-events/jevents.c:51:24: fatal error: linux/list.h: No such file or directory
 #include <linux/list.h>
                        ^
compilation terminated.
mv: cannot stat ‘pmu-events/.jevents.o.tmp’: No such file or directory

perf tool build is different.
Much appreciated, John > it's heavily used within perf and I'm pretty sure we want > to keep around just one way of doing lists > > thanks, > jirka > > . >
On 08/12/2017 12:29, Jiri Olsa wrote: > On Wed, Dec 06, 2017 at 03:20:14PM +0000, John Garry wrote: >> On 06/12/2017 13:36, Jiri Olsa wrote: >>> On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote: >>>> For some architectures (like arm64), there are architecture- >>>> defined recommended events. Vendors may not be obliged to >>>> follow the recommendation and may implement their own pmu >>>> event for a specific event code. >>>> >>>> This patch adds support for parsing events from arch-defined >>>> recommended JSONs, and then fixing up vendor events when >>>> they have implemented these events as recommended. >>> >>> in the previous patch you added the vendor support, so >>> you have arch|vendor|platform key for the event list >>> and perf have the most current/local event list >>> >>> why would you need to fix it? if there's new event list, >>> the table gets updated, perf is rebuilt.. I'm clearly >>> missing something ;-) >> >> The 2 patches are quite separate. In the first patch, I just added support >> for the vendor subdirectory. >> >> So this patch is not related to rebuilding when adding a new event list or >> dependency checking. >> >> Here we are trying to allow the vendor to just specify that an event is >> supported as standard in their platform, without duplicating all the >> standard event fields in their JSON. When processing the vendor JSONs, the >> jevents tool can figure which events are standard and create the proper >> event entries in the pmu events table, referencing the architecture JSON. > Hi jirka, > I think we should keep this simple and mangle this with some pointer logic > > now you have arch/vendor/platform directory structure.. I'm glad that there seems to be no objection to this, as I feel that this was a problem. why don't > you add events for every such directory? I understand there will > be duplications, but we already have them for other archs and it's > not big deal: The amount of duplication was the concern. 
As mentioned earlier, it would be anticipated that every vendor would implement these events as recommended, so a copy for every platform from every vendor. We're looking for a way to avoid this. Actually having a scalable JSON standard format for pmu events, which allows us to define common events per architecture / vendor and reference them per platform JSON could be useful. Here we're dealing with trade-off between duplication (simplicity) vs complexity (or over-engineering). > > [jolsa@krava perf]$ grep -r L2_RQSTS.DEMAND_DATA_RD_MISS pmu-events/arch/* > pmu-events/arch/x86/broadwell/cache.json: "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", > pmu-events/arch/x86/haswell/cache.json: "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", > pmu-events/arch/x86/broadwellde/cache.json: "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", > pmu-events/arch/x86/haswellx/cache.json: "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", > pmu-events/arch/x86/skylake/cache.json: "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", > pmu-events/arch/x86/skylakex/cache.json: "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", > pmu-events/arch/x86/broadwellx/cache.json: "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", > > thanks, > jirka Cheers, John > > . >
On Fri, Dec 08, 2017 at 03:42:10PM +0000, John Garry wrote: > On 08/12/2017 12:29, Jiri Olsa wrote: > > On Wed, Dec 06, 2017 at 03:20:14PM +0000, John Garry wrote: > > > On 06/12/2017 13:36, Jiri Olsa wrote: > > > > On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote: > > > > > For some architectures (like arm64), there are architecture- > > > > > defined recommended events. Vendors may not be obliged to > > > > > follow the recommendation and may implement their own pmu > > > > > event for a specific event code. > > > > > > > > > > This patch adds support for parsing events from arch-defined > > > > > recommended JSONs, and then fixing up vendor events when > > > > > they have implemented these events as recommended. > > > > > > > > in the previous patch you added the vendor support, so > > > > you have arch|vendor|platform key for the event list > > > > and perf have the most current/local event list > > > > > > > > why would you need to fix it? if there's new event list, > > > > the table gets updated, perf is rebuilt.. I'm clearly > > > > missing something ;-) > > > > > > The 2 patches are quite separate. In the first patch, I just added support > > > for the vendor subdirectory. > > > > > > So this patch is not related to rebuilding when adding a new event list or > > > dependency checking. > > > > > > Here we are trying to allow the vendor to just specify that an event is > > > supported as standard in their platform, without duplicating all the > > > standard event fields in their JSON. When processing the vendor JSONs, the > > > jevents tool can figure which events are standard and create the proper > > > event entries in the pmu events table, referencing the architecture JSON. > > > > Hi jirka, > > > I think we should keep this simple and mangle this with some pointer logic sry for confusion, of course it should have been '.. and NOT mangle..' ;-) > > > > now you have arch/vendor/platform directory structure.. 
> > I'm glad that there seems to be no objection to this, as I feel that this > was a problem. > > why don't > > you add events for every such directory? I understand there will > > be duplications, but we already have them for other archs and it's > > not big deal: > > The amount of duplication was the concern. As mentioned earlier, it would be > anticipated that every vendor would implement these events as recommended, > so a copy for every platform from every vendor. We're looking for a way to > avoid this. > > Actually having a scalable JSON standard format for pmu events, which allows > us to define common events per architecture / vendor and reference them per > platform JSON could be useful. > > Here we're dealing with trade-off between duplication (simplicity) vs > complexity (or over-engineering). understood, but as I said we already are ok with duplicates, if it's reasonable size as is for x86 now.. how much amount are we talking about for arm? jirka
On 09/12/2017 07:31, Jiri Olsa wrote: > On Fri, Dec 08, 2017 at 03:42:10PM +0000, John Garry wrote: >> On 08/12/2017 12:29, Jiri Olsa wrote: >>> On Wed, Dec 06, 2017 at 03:20:14PM +0000, John Garry wrote: >>>> On 06/12/2017 13:36, Jiri Olsa wrote: >>>>> On Wed, Dec 06, 2017 at 12:13:16AM +0800, John Garry wrote: >>>>>> For some architectures (like arm64), there are architecture- >>>>>> defined recommended events. Vendors may not be obliged to >>>>>> follow the recommendation and may implement their own pmu >>>>>> event for a specific event code. >>>>>> >>>>>> This patch adds support for parsing events from arch-defined >>>>>> recommended JSONs, and then fixing up vendor events when >>>>>> they have implemented these events as recommended. >>>>> >>>>> in the previous patch you added the vendor support, so >>>>> you have arch|vendor|platform key for the event list >>>>> and perf have the most current/local event list >>>>> >>>>> why would you need to fix it? if there's new event list, >>>>> the table gets updated, perf is rebuilt.. I'm clearly >>>>> missing something ;-) >>>> >>>> The 2 patches are quite separate. In the first patch, I just added support >>>> for the vendor subdirectory. >>>> >>>> So this patch is not related to rebuilding when adding a new event list or >>>> dependency checking. >>>> >>>> Here we are trying to allow the vendor to just specify that an event is >>>> supported as standard in their platform, without duplicating all the >>>> standard event fields in their JSON. When processing the vendor JSONs, the >>>> jevents tool can figure which events are standard and create the proper >>>> event entries in the pmu events table, referencing the architecture JSON. >>> >> >> Hi jirka, >> >>> I think we should keep this simple and mangle this with some pointer logic > > sry for confusion, of course it should have been '.. and NOT mangle..' ;-) > >>> >>> now you have arch/vendor/platform directory structure.. 
>> I'm glad that there seems to be no objection to this, as I feel that
>> this was a problem.
>>
>>> why don't you add events for every such directory? I understand
>>> there will be duplications, but we already have them for other
>>> archs and it's not big deal:
>>
>> The amount of duplication was the concern. As mentioned earlier, it
>> would be anticipated that every vendor would implement these events
>> as recommended, so a copy for every platform from every vendor.
>> We're looking for a way to avoid this.
>>
>> Actually having a scalable JSON standard format for pmu events, which
>> allows us to define common events per architecture / vendor and
>> reference them per platform JSON could be useful.
>>
>> Here we're dealing with a trade-off between duplication (simplicity)
>> vs complexity (or over-engineering).
>
> understood, but as I said we already are ok with duplicates,
> if it's reasonable size as is for x86 now.. how much amount
> are we talking about for arm?

Hi jirka,

These JSONs would only apply to vendors which have custom ARMv8
implementations. If you check the ARMv8 ARM, there's 10 such companies
recorded as ARMv8 implementers. So this means that in the future we
could have tens to hundreds of JSONs for arm64, all with these
duplicated events.

At this point I'll ask Will Deacon to share his thoughts, as he
originally requested this feature.

Thanks,
John
>> Actually having a scalable JSON standard format for pmu events, which
>> allows us to define common events per architecture / vendor and
>> reference them per platform JSON could be useful.
>>
>> Here we're dealing with a trade-off between duplication (simplicity)
>> vs complexity (or over-engineering).
>
> understood, but as I said we already are ok with duplicates,
> if it's reasonable size as is for x86 now.. how much amount
> are we talking about for arm?

Hi Jirka,

When you say reasonable size for x86, I ran a string duplication finder
on the x86 JSONs and the results show a huge amount of duplication.
Please check this:
https://gist.githubusercontent.com/johnpgarry/68bc87e823ae2ce0f7b475b4e55e5795/raw/f4cea138999d8b34151b9586d733592e01774d7a/x86%2520JSON%2520duplication

Extract:
"Found a 65 line (311 tokens) duplication in the following files:
Starting at line 100 of /linux/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json
Starting at line 100 of /linux/tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json
Starting at line 100 of /linux/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json
Starting at line 100 of /linux/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
Starting at line 76 of /linux/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json
Starting at line 100 of /linux/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json
Starting at line 76 of /linux/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json
Starting at line 100 of /linux/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json"

Won't this all potentially have a big maintenance cost?
For example, I saw multiple JSON update patches which look identical:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/tools/perf/pmu-events/arch/x86?h=v4.15-rc3&id=7347bba5552f479d4292ffd008d18d41a965f021
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/tools/perf/pmu-events/arch/x86?h=v4.15-rc3&id=984d91f4c62f64026cbfef51f609971025934cec

I just don't know how this schema scales with more archs and more
platforms supported. It's just early days now...

Regards,
John
> Won't this all potentially have a big maintenance cost?

No. It's all auto generated.

The only cost is slightly bigger binary size.

I would hope your event files are auto generated too.

> I just don't know how this schema scales with more archs and more
> platforms supported. It's just early days now...

Only perf will get slightly bigger, but memory is not exactly expensive.

In fact, the extra memory won't even be faulted in if it's not used,
so it's only disk space.

-Andi
On Fri, Dec 15, 2017 at 11:22:50AM +0000, John Garry wrote: > > > Actually having a scalable JSON standard format for pmu events, which allows > > > us to define common events per architecture / vendor and reference them per > > > platform JSON could be useful. > > > > > > Here we're dealing with trade-off between duplication (simplicity) vs > > > complexity (or over-engineering). > > > > understood, but as I said we already are ok with duplicates, > > if it's reasonable size as is for x86 now.. how much amount > > are we talking about for arm? > > > > Hi Jirka, > > When you say reasonable size for x86, I ran a string duplication finder on > the x86 JSONs and the results show a huge amount of duplication. Please > check this: > https://gist.githubusercontent.com/johnpgarry/68bc87e823ae2ce0f7b475b4e55e5795/raw/f4cea138999d8b34151b9586d733592e01774d7a/x86%2520JSON%2520duplication > > Extract: > "Found a 65 line (311 tokens) duplication in the following files: > Starting at line 100 of > /linux/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json > Starting at line 100 of > /linux/tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json > Starting at line 100 of > /linux/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json > Starting at line 100 of > /linux/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json > Starting at line 76 of > /linux/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json > Starting at line 100 of > /linux/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json > Starting at line 76 of > /linux/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json > Starting at line 100 of > /linux/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json" > > Won't this all potentially have a big maintainence cost? 
as Andi said it's mostly just the disk space,
which is not big deal

I'm not doing JSON file updates, but I think having
simple single dir for platform/cpu could save us some
confusion in future

however I won't oppose if you want to add this logic,
but please:
  - use the list_head ;-)
  - leave the process_one_file function simple
    and separate the level0 processing
  - you are using 'EventCode' as a unique ID to find
    the base, but it's not unique for x86, you'll need
    to add some other ID scheme that fits to all archs

thanks,
jirka

> For example, I saw multiple JSON update patches which look identical:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/tools/perf/pmu-events/arch/x86?h=v4.15-rc3&id=7347bba5552f479d4292ffd008d18d41a965f021
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/tools/perf/pmu-events/arch/x86?h=v4.15-rc3&id=984d91f4c62f64026cbfef51f609971025934cec
>
> I just don't know how this schema scales with more archs and more
> platforms supported. It's just early days now...
>
> Regards,
> John
On 16/12/2017 18:47, Andi Kleen wrote:

Hi Andi,

>> Won't this all potentially have a big maintenance cost?
>
> No. It's all auto generated.
>
> The only cost is slightly bigger binary size.
>
> I would hope your event files are auto generated too.

No, they're not - we are just manually transcribing the event data from
the architecture reference manual (for arch-defined events) or internal
text document (for custom events).

Can you describe how you autogenerate the JSONs? Do you have some
internal proprietary HW file format describing events, with files
supplied from HW designer, which you can just translate into a JSON?
Would the files support dereferencing events to improve scalability?

>> I just don't know how this schema scales with more archs and more
>> platforms supported. It's just early days now...
>
> Only perf will get slightly bigger, but memory is not exactly expensive.
>
> In fact, the extra memory won't even be faulted in if it's not used,
> so it's only disk space.

Much appreciated,
John
> Can you describe how you autogenerate the JSONs? Do you have some
> internal proprietary HW file format describing events, with files
> supplied from HW designer, which you can just translate into a JSON?
> Would the files support dereferencing events to improve scalability?

For Intel JSON is an official format, which is maintained for each CPU.
It is automatically generated from an internal database
https://download.01.org/perfmon/

I have some python scripts to convert these Intel JSONs into the perf
format (which has some additional headers, and is split into different
categories, and add metrics). They have some Intel specifics, so may
not be useful for you.

There's no support for dereference, each CPU gets its own unique file.
But you could do the merge simply with the attached script which merges
two JSON files.

-Andi

#!/usr/bin/python
# merge json event files
# merge-json file1.json file2... > merged.json
import sys
import json

merged = []  # renamed from 'all' to avoid shadowing the builtin
for fn in sys.argv[1:]:
    with open(fn) as f:
        merged.extend(json.load(f))
print(json.dumps(merged, sort_keys=True, indent=4, separators=(',', ': ')))
On 02/01/2018 17:48, Andi Kleen wrote:
>> Can you describe how you autogenerate the JSONs? Do you have some
>> internal proprietary HW file format describing events, with files
>> supplied from HW designer, which you can just translate into a JSON?
>> Would the files support dereferencing events to improve scalability?
>
> For Intel JSON is an official format, which is maintained for each CPU.
> It is automatically generated from an internal database
> https://download.01.org/perfmon/
>
> I have some python scripts to convert these Intel JSONs into the perf
> format (which has some additional headers, and is split into
> different categories, and add metrics).

OK, understood. Unfortunately I could not see such a database being
maintained for ARM implementers.

> They have some Intel specifics, so may not be useful for you.
>
> There's no support for dereference, each CPU gets its own unique file.

Right.

> But you could do the merge simply with the attached script which merges
> two JSON files.

I assume that you're talking about simply merging the microarchitecture
and the platform-specific event JSONs at build time. If yes, this would
not work for us:
- the microarchitecture JSON would contain definitions of all events,
  but there is no architectural method to check if they are implemented
- we need to deal with the scenario of non-standard event implementations

But I could update the script to deal with this and add to the build
(Jirka looked to be ok with the same in jevents, albeit a few caveats).

All the best,
John
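[For illustration, the merge script could be adapted along the lines John describes — only events the platform JSON actually lists survive, and a platform entry carrying its own fields overrides the microarchitecture definition. A sketch only; the "platform fields win" rule and the field names are assumptions, not part of the posted patch:]

```python
def merge_events(uarch_events, platform_events):
    # Microarchitecture (recommended) events, indexed by event code.
    by_code = {e["EventCode"]: e for e in uarch_events}
    merged = []
    for ev in platform_events:
        # Start from the recommended definition, if one exists...
        entry = dict(by_code.get(ev["EventCode"], {}))
        # ...then let the platform's own fields take precedence,
        # covering non-standard implementations.
        entry.update(ev)
        merged.append(entry)
    # Events absent from the platform JSON are treated as
    # unimplemented and simply dropped.
    return merged
```

[Unlike the plain concatenation in Andi's script, this addresses both caveats: unlisted arch events never reach the output, and a vendor's custom definition is never clobbered by the recommended one.]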
On 21/12/2017 19:39, Jiri Olsa wrote:
>> Hi Jirka,
>>
>> When you say reasonable size for x86, I ran a string duplication finder
>> on the x86 JSONs and the results show a huge amount of duplication.
>> Please check this:
>> https://gist.githubusercontent.com/johnpgarry/68bc87e823ae2ce0f7b475b4e55e5795/raw/f4cea138999d8b34151b9586d733592e01774d7a/x86%2520JSON%2520duplication
>>
>> Extract:
>> "Found a 65 line (311 tokens) duplication in the following files:
>> Starting at line 100 of
>> /linux/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json
>> Starting at line 100 of
>> /linux/tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json
>> Starting at line 100 of
>> /linux/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json
>> Starting at line 100 of
>> /linux/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
>> Starting at line 76 of
>> /linux/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json
>> Starting at line 100 of
>> /linux/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json
>> Starting at line 76 of
>> /linux/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json
>> Starting at line 100 of
>> /linux/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json"

Hi Jirka,

Sorry for the slow reply.

>> Won't this all potentially have a big maintenance cost?
>
> as Andi said it's mostly just the disk space,
> which is not a big deal
>
> I'm not doing JSON file updates, but I think having
> a simple single dir for platform/cpu could save us some
> confusion in future

Understood. But for ARM, which has very standardised architecture
events, it is good to reduce this event duplication between platforms.

> however I won't oppose if you want to add this logic,
> but please:
> - use the list_head ;-)

Of course

> - leave the process_one_file function simple
>   and separate the level0 processing

ok, this is how it should look already, albeit a couple of
process_one_file() modifications.
I'll re-check this.

> - you are using 'EventCode' as a unique ID to find
>   the base, but it's not unique for x86, you'll need
>   to add some other ID scheme that fits all archs

Right, so you mentioned earlier using a new keyword token to identify
whether we use the standard event, so we can go this way - ok?

I would also like to mention at this point why I did the event
pre-processing in jevents, and not a separate script:
- the current build does not traverse the arch tree
- tree traversal for JSON processing is done in jevents
- a script would mean derived objects, which means:
  - makefile changes for derived objects
  - jevents would have to deal with derived objects
- jevents already has support for JSON processing

The advantage of using a script is that we keep the JSON processing in
jevents simple.

All the best,
John

> thanks,
> jirka
On Thu, Jan 04, 2018 at 05:17:56PM +0000, John Garry wrote:

SNIP

> Hi Jirka,
>
> Sorry for the slow reply.

np, just got back from holidays anyway ;-)

> > > Won't this all potentially have a big maintenance cost?
> >
> > as Andi said it's mostly just the disk space,
> > which is not a big deal
> >
> > I'm not doing JSON file updates, but I think having
> > a simple single dir for platform/cpu could save us some
> > confusion in future
>
> Understood. But for ARM, which has very standardised architecture
> events, it is good to reduce this event duplication between platforms.
>
> > however I won't oppose if you want to add this logic,
> > but please:
> > - use the list_head ;-)
>
> Of course
>
> > - leave the process_one_file function simple
> >   and separate the level0 processing
>
> ok, this is how it should look already, albeit a couple of
> process_one_file() modifications. I'll re-check this.
>
> > - you are using 'EventCode' as a unique ID to find
> >   the base, but it's not unique for x86, you'll need
> >   to add some other ID scheme that fits all archs
>
> Right, so you mentioned earlier using a new keyword token to identify
> whether we use the standard event, so we can go this way - ok?

yes, something like that

> I would also like to mention at this point why I did the event
> pre-processing in jevents, and not a separate script:
> - the current build does not traverse the arch tree
> - tree traversal for JSON processing is done in jevents
> - a script would mean derived objects, which means:
>   - makefile changes for derived objects
>   - jevents would have to deal with derived objects
> - jevents already has support for JSON processing
>
> The advantage of using a script is that we keep the JSON processing in
> jevents simple.

I don't mind the extra functionality in jevents as long as the current
one keeps on working and the new one works for all archs ;-)

thanks,
jirka
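Jiri's "new keyword token" idea - matching by an explicit reference rather than by EventCode, which is not unique on x86 - can be sketched as follows. The token name used here, "ArchStdEvent", is a made-up placeholder for illustration; the thread had not settled on a name.

```python
def resolve_arch_std_events(arch_events, vendor_events):
    """Resolve vendor entries that name a recommended event explicitly.

    Keyed on event name rather than EventCode (not unique on x86):
    a vendor entry carries a reference token - here called
    "ArchStdEvent", an assumed name - and pulls in the named arch
    event's fields, with vendor-supplied fields taking precedence.
    """
    by_name = {e["EventName"]: e for e in arch_events}
    resolved = []
    for ev in vendor_events:
        ev = dict(ev)                  # don't mutate the caller's entries
        ref = ev.pop("ArchStdEvent", None)
        if ref is not None:
            base = dict(by_name[ref])
            base.update(ev)            # vendor overrides win
            ev = base
        resolved.append(ev)
    return resolved
```

Entries without the token pass through untouched, so a JSON mixing standard and fully custom events still parses with one code path.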
diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
index a0d489e..a820ed4 100644
--- a/tools/perf/pmu-events/jevents.c
+++ b/tools/perf/pmu-events/jevents.c
@@ -42,6 +42,7 @@
 #include <dirent.h>
 #include <sys/time.h>		/* getrlimit */
 #include <sys/resource.h>	/* getrlimit */
+#include <sys/queue.h>
 #include <ftw.h>
 #include <sys/stat.h>
 #include "jsmn.h"
@@ -366,6 +367,94 @@ static int print_events_table_entry(void *data, char *name, char *event,
 	return 0;
 }
 
+struct event_struct {
+	char *name;
+	char *event;
+	char *desc;
+	char *long_desc;
+	char *pmu;
+	char *unit;
+	char *perpkg;
+	char *metric_expr;
+	char *metric_name;
+	char *metric_group;
+	LIST_ENTRY(event_struct) list;
+	char strings[];
+};
+
+LIST_HEAD(listhead, event_struct) recommended_events;
+
+static int save_recommended_events(void *data, char *name, char *event,
+				   char *desc, char *long_desc,
+				   char *pmu, char *unit, char *perpkg,
+				   char *metric_expr,
+				   char *metric_name, char *metric_group)
+{
+	static int count = 0;
+	char temp[1024];
+	struct event_struct *es;
+	struct stat *sb = data;
+	int len = 0;
+	char *strings;
+
+	/*
+	 * Lazily allocate size of the JSON file to hold the
+	 * strings, which would be more than large enough.
+	 */
+	len = sb->st_size;
+
+	es = malloc(sizeof(*es) + len);
+	if (!es)
+		return -ENOMEM;
+	memset(es, 0, sizeof(*es));
+	LIST_INSERT_HEAD(&recommended_events, es, list);
+
+	strings = &es->strings[0];
+
+	if (name) {
+		es->name = strings;
+		strings += snprintf(strings, len, "%s", name) + 1;
+	}
+	if (event) {
+		es->event = strings;
+		strings += snprintf(strings, len, "%s", event) + 1;
+	}
+	if (desc) {
+		es->desc = strings;
+		strings += snprintf(strings, len, "%s", desc) + 1;
+	}
+	if (long_desc) {
+		es->long_desc = strings;
+		strings += snprintf(strings, len, "%s", long_desc) + 1;
+	}
+	if (pmu) {
+		es->pmu = strings;
+		strings += snprintf(strings, len, "%s", pmu) + 1;
+	}
+	if (unit) {
+		es->unit = strings;
+		strings += snprintf(strings, len, "%s", unit) + 1;
+	}
+	if (perpkg) {
+		es->perpkg = strings;
+		strings += snprintf(strings, len, "%s", perpkg) + 1;
+	}
+	if (metric_expr) {
+		es->metric_expr = strings;
+		strings += snprintf(strings, len, "%s", metric_expr) + 1;
+	}
+	if (metric_name) {
+		es->metric_name = strings;
+		strings += snprintf(strings, len, "%s", metric_name) + 1;
+	}
+	if (metric_group) {
+		es->metric_group = strings;
+		strings += snprintf(strings, len, "%s", metric_group) + 1;
+	}
+
+	return 0;
+}
+
 static void print_events_table_suffix(FILE *outfp)
 {
 	fprintf(outfp, "{\n");
@@ -407,6 +496,61 @@ static char *real_event(const char *name, char *event)
 	return event;
 }
 
+static void fixup_field(char *from, char **to)
+{
+	/*
+	 * If we already had a valid pointer (string), then
+	 * don't allocate a new one, just reuse and overwrite.
+	 */
+	if (!*to)
+		*to = malloc(strlen(from) + 1);	/* +1 for the NUL terminator */
+
+	strcpy(*to, from);
+}
+
+static int try_fixup(const char *fn, char *event, char **desc, char **name,
+		     char **long_desc, char **pmu, char **filter,
+		     char **perpkg, char **unit, char **metric_expr,
+		     char **metric_name, char **metric_group)
+{
+	/* try to find matching event from recommended values */
+	struct event_struct *es;
+
+	LIST_FOREACH(es, &recommended_events, list) {
+		if (!strcmp(event, es->event)) {
+			/* now fixup */
+			if (es->desc)
+				fixup_field(es->desc, desc);
+			if (es->name)
+				fixup_field(es->name, name);
+			if (es->long_desc)
+				fixup_field(es->long_desc, long_desc);
+			if (es->pmu)
+				fixup_field(es->pmu, pmu);
+			// if (es->filter)
+			//	fixup_field(es->filter, filter);
+			if (es->perpkg)
+				fixup_field(es->perpkg, perpkg);
+			if (es->unit)
+				fixup_field(es->unit, unit);
+			if (es->metric_expr)
+				fixup_field(es->metric_expr, metric_expr);
+			if (es->metric_name)
+				fixup_field(es->metric_name, metric_name);
+			if (es->metric_group)
+				fixup_field(es->metric_group, metric_group);
+
+			return 0;
+		}
+	}
+
+	pr_err("%s: could not find matching %s for %s\n", prog, event, fn);
+	return -1;
+}
+
+#define FREE_MEMORIES \
+	free(event); free(desc); free(name); free(long_desc); \
+	free(extra_desc); free(pmu); free(filter); free(perpkg); \
+	free(unit); free(metric_expr); free(metric_name); free(metric_group);
+
 /* Call func with each event in the json file */
 int json_events(const char *fn,
 	  int (*func)(void *data, char *name, char *event, char *desc,
@@ -551,20 +695,22 @@ int json_events(const char *fn,
 		if (name)
 			fixname(name);
 
+		if (!desc) {
+			/*
+			 * If we have no valid desc, then fixup *all* values
+			 * from recommended by matching the event.
+			 */
+			err = try_fixup(fn, event, &desc, &name, &long_desc,
+					&pmu, &filter, &perpkg, &unit,
+					&metric_expr, &metric_name,
+					&metric_group);
+			if (err) {
+				FREE_MEMORIES
+				goto out_free;
+			}
+		}
+
 		err = func(data, name, real_event(name, event), desc,
 			   long_desc, pmu, unit, perpkg, metric_expr,
 			   metric_name, metric_group);
-		free(event);
-		free(desc);
-		free(name);
-		free(long_desc);
-		free(extra_desc);
-		free(pmu);
-		free(filter);
-		free(perpkg);
-		free(unit);
-		free(metric_expr);
-		free(metric_name);
-		free(metric_group);
+		FREE_MEMORIES
 		if (err)
 			break;
 		tok += j;
@@ -776,6 +922,32 @@ static int isLeafDir(const char *fpath)
 	return res;
 }
 
+static int isJsonFile(const char *name)
+{
+	const char *suffix;
+
+	if (strlen(name) < 5)
+		return 0;
+
+	suffix = name + strlen(name) - 5;
+
+	if (strncmp(suffix, ".json", 5) == 0)
+		return 1;
+	return 0;
+}
+
+static int preprocess_level0_files(const char *fpath, const struct stat *sb,
+				   int typeflag, struct FTW *ftwbuf)
+{
+	int level = ftwbuf->level;
+	int is_file = typeflag == FTW_F;
+
+	if (level == 1 && is_file && isJsonFile(fpath))
+		return json_events(fpath, save_recommended_events, (void *)sb);
+
+	return 0;
+}
+
 static int process_one_file(const char *fpath, const struct stat *sb,
 			    int typeflag, struct FTW *ftwbuf)
 {
@@ -806,8 +978,10 @@ static int process_one_file(const char *fpath, const struct stat *sb,
 		level, sb->st_size, bname, fpath);
 
 	/* base dir */
-	if (level == 0)
-		return 0;
+	if (level == 0) {
+		LIST_INIT(&recommended_events);
+		return nftw(fpath, preprocess_level0_files, get_maxfds(), 0);
+	}
 
 	/* model directory, reset topic */
 	if (level == 1 && is_dir && isLeafDir(fpath)) {
@@ -869,9 +1043,7 @@ static int process_one_file(const char *fpath, const struct stat *sb,
 	 * ignore it. It could be a readme.txt for instance.
 	 */
 	if (is_file) {
-		char *suffix = bname + strlen(bname) - 5;
-
-		if (strncmp(suffix, ".json", 5)) {
+		if (!isJsonFile(bname)) {
 			pr_info("%s: Ignoring file without .json suffix %s\n",
 				prog, fpath);
 			return 0;
@@ -933,6 +1105,7 @@ int main(int argc, char *argv[])
 	const char *output_file;
 	const char *start_dirname;
 	struct stat stbuf;
+	struct event_struct *es1, *es2;
 
 	prog = basename(argv[0]);
 	if (argc < 4) {
@@ -988,6 +1161,14 @@ int main(int argc, char *argv[])
 		goto empty_map;
 	}
 
+	/* Free struct for recommended events */
+	es1 = LIST_FIRST(&recommended_events);
+	while (es1) {
+		es2 = LIST_NEXT(es1, list);
+		free(es1);
+		es1 = es2;
+	}
+
 	if (close_table)
 		print_events_table_suffix(eventsfp);
For some architectures (like arm64), there are architecture-
defined recommended events. Vendors may not be obliged to
follow the recommendation and may implement their own PMU
event for a specific event code.

This patch adds support for parsing events from arch-defined
recommended JSONs, and then fixing up vendor events when
they have implemented these events as recommended.

In the vendor JSON, to specify that the event is supported
according to the recommendation, only the event code is
added to the JSON entry - no other event elements need be
added, like below:
[
    {
        "EventCode": "0x40",
    },

]

The PMU event parsing will check for "BriefDescription"
field presence only for this.

If "BriefDescription" is present, then it is implied
that the vendor has implemented their own custom event,
and there is no fixup. Other fields are ignored.

*TODO: update documentation

Signed-off-by: John Garry <john.garry@huawei.com>
---
 tools/perf/pmu-events/jevents.c | 215 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 198 insertions(+), 17 deletions(-)
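The fixup rule this commit message describes can be modelled in a few lines of Python (a sketch of the semantics only, not of the C code in jevents): absence of "BriefDescription" selects the fixup path, and an event code with no recommended counterpart is an error, analogous to the pr_err() in try_fixup().

```python
def fix_up_vendor_event(ev, recommended):
    """Fix up one vendor event entry against the recommended events.

    recommended: dict mapping EventCode -> recommended event fields.
    An entry with "BriefDescription" is a custom vendor event and is
    returned unchanged; otherwise every missing field is taken from
    the recommended event with the same EventCode.
    """
    if "BriefDescription" in ev:
        return ev
    code = ev["EventCode"]
    if code not in recommended:
        raise LookupError("could not find matching event for %s" % code)
    fixed = dict(recommended[code])
    fixed.update(ev)
    return fixed
```

This mirrors the two outcomes the commit message names: fixup for recommendation-only entries, and verbatim pass-through when the vendor has documented a custom event.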