Message ID | 54FEFA23.7050307@arm.com (mailing list archive) |
---|---|
State | New, archived |
> I think we could still solve this problem by deferring the 'context'
> validation to the core. The PMUs could validate the group, within its
> context. i.e, if it can accommodate its events as a group, during
> event_init. The problem we face now, is encountering an event from a
> different PMU, which we could leave it to the core as we do already.

Good point: we're not reliant on other drivers because the core will
still check the context. We only hope that those other drivers don't
make similar mistakes and corrupt things.

[...]

>  static int
> -validate_event(struct pmu_hw_events *hw_events,
> -	       struct perf_event *event)
> +validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events,
> +	       struct perf_event *event)
>  {
> -	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
> +	struct arm_pmu *armpmu;
>
>  	if (is_software_event(event))
>  		return 1;
>
> +	/*
> +	 * We are only worried if we can accommodate the events
> +	 * from this pmu in this group.
> +	 */
> +	if (event->pmu != pmu)
> +		return 1;

It's better to explicitly reject this case. We know it's non-sensical
and there's no point wasting any time on it.

That will also make big.LITTLE support a bit nicer, whenever I get that
under control -- big and LITTLE events could live in the same task
context (so the core won't reject grouping them) but mustn't be in the
same group (so we have to reject grouping in the backend).

I'd still prefer the group validation being triggered explicitly by the
core code, so that it's logically separate from initialising the event
in isolation, but that's looking like a much bigger job, and I don't
trust myself to correctly update every PMU driver for v4.0.

For the moment let's clean up the commit message for the original
patch. I'll add splitting group validation to my TODO list; there seems
to be a slot free around 2035...

Thanks,
Mark.
diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 557e128..b3af19b 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -259,20 +259,28 @@ out:
 }
 
 static int
-validate_event(struct pmu_hw_events *hw_events,
-	       struct perf_event *event)
+validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events,
+	       struct perf_event *event)
 {
-	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
+	struct arm_pmu *armpmu;
 
 	if (is_software_event(event))
 		return 1;
 
+	/*
+	 * We are only worried if we can accommodate the events
+	 * from this pmu in this group.
+	 */
+	if (event->pmu != pmu)
+		return 1;
+
 	if (event->state < PERF_EVENT_STATE_OFF)
 		return 1;
 
 	if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
 		return 1;
 
+	armpmu = to_arm_pmu(event->pmu);
 	return armpmu->get_event_idx(hw_events, event) >= 0;
 }
 
@@ -288,15 +296,15 @@ validate_group(struct perf_event *event)
 	 */
 	memset(&fake_pmu.used_mask, 0, sizeof(fake_pmu.used_mask));
 
-	if (!validate_event(&fake_pmu, leader))
+	if (!validate_event(event->pmu, &fake_pmu, leader))
 		return -EINVAL;
 
 	list_for_each_entry(sibling, &leader->sibling_list, group_entry) {
-		if (!validate_event(&fake_pmu, sibling))
+		if (!validate_event(event->pmu, &fake_pmu, sibling))
 			return -EINVAL;
 	}
 
-	if (!validate_event(&fake_pmu, event))
+	if (!validate_event(event->pmu, &fake_pmu, event))
 		return -EINVAL;
 
 	return 0;