Message ID | 20240103031409.2504051-12-dapeng1.mi@linux.intel.com (mailing list archive)
---|---
State | New, archived
Series | pmu test bugs fix and improvements
```diff
diff --git a/x86/pmu.c b/x86/pmu.c
index c8d4a0dcd362..d5c3fcfaa84c 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -172,6 +172,16 @@ static void adjust_events_range(struct pmu_event *gp_events, int branch_idx)
 		gp_events[branch_idx].min = PRECISE_LOOP_BRANCHES;
 		gp_events[branch_idx].max = PRECISE_LOOP_BRANCHES;
 	}
+
+	/*
+	 * If HW supports IBPB, one branch miss is forced to trigger by
+	 * IBPB command. Thus overwrite the lower boundary of branch misses
+	 * event to 1.
+	 */
+	if (has_ibpb()) {
+		/* branch misses event */
+		gp_events[branch_idx + 1].min = 1;
+	}
 }
 
 volatile uint64_t irq_received;
```
Since the IBPB command is already leveraged to force one branch miss, the lower boundary of the branch misses event can be raised from 0 to 1 on processors that support IBPB. This eliminates the ambiguity of a zero count.

Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
 x86/pmu.c | 10 ++++++++++
 1 file changed, 10 insertions(+)