Message ID | 20190528150320.25953-6-raphael.gault@arm.com (mailing list archive) |
---|---|
State | RFC |
Series | arm64: Enable access to pmu registers by user-space
On Tue, May 28, 2019 at 04:03:18PM +0100, Raphael Gault wrote:
> +static int emulate_pmu(struct pt_regs *regs, u32 insn)
> +{
> +        u32 sys_reg, rt;
> +        u32 pmuserenr;
> +
> +        sys_reg = (u32)aarch64_insn_decode_immediate(AARCH64_INSN_IMM_16, insn) << 5;
> +        rt = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn);
> +        pmuserenr = read_sysreg(pmuserenr_el0);
> +
> +        if ((pmuserenr & (ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR)) !=
> +            (ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR))
> +                return -EINVAL;
> +

I would really prefer there to be a comment here that explains how the
'0' value works. Maybe something like:

        /*
         * Userspace is expected to only use this in the context of the
         * scheme described in the struct perf_event_mmap_page comments.
         *
         * Given that context, we can only get here if we got migrated
         * between getting the register index and doing the MRS read.
         * This in turn implies we'll fail the sequence and retry, so
         * any value returned is 'good', all we need is to be non-fatal.
         */

> +        pt_regs_write_reg(regs, rt, 0);

And given the above, we don't even need to do this, we can simply
preserve whatever garbage was in the register and return to userspace.
The only thing we really need is for the trap to be non-fatal.

> +
> +        arm64_skip_faulting_instruction(regs, 4);
> +        return 0;
> +}
> +
> +/*
> + * This hook will only be triggered by mrs
> + * instructions on PMU registers. This is mandatory
> + * in order to have a consistent behaviour even on
> + * big.LITTLE systems.
> + */
> +static struct undef_hook pmu_hook = {
> +        .instr_mask = 0xffff8800,
> +        .instr_val = 0xd53b8800,
> +        .fn = emulate_pmu,
> +};
> +
> +static int __init enable_pmu_emulation(void)
> +{
> +        register_undef_hook(&pmu_hook);
> +        return 0;
> +}
> +
> +core_initcall(enable_pmu_emulation);
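For reference, the scheme mentioned above is the self-monitoring read sequence documented in the struct perf_event_mmap_page comments (include/uapi/linux/perf_event.h). Below is a rough userspace sketch of that sequence, assuming EL0 counter access has been enabled as this series intends; read_counter() and read_self_count() are invented names and the index-to-register mapping is abbreviated, so treat it as an illustration rather than working tooling.

#include <stdint.h>
#include <linux/perf_event.h>

/*
 * Invented helper, not part of the patch: issue the mrs for a given
 * event counter. The system register name must be a compile-time
 * constant, so only a couple of counters are spelled out here.
 */
static uint64_t read_counter(uint32_t counter)
{
        uint64_t val = 0;

        switch (counter) {
        case 0:
                asm volatile("mrs %0, pmevcntr0_el0" : "=r" (val));
                break;
        case 1:
                asm volatile("mrs %0, pmevcntr1_el0" : "=r" (val));
                break;
        default:
                /* Sketch only: remaining counters omitted. */
                break;
        }
        return val;
}

/*
 * Lock/retry sequence described in the perf_event_mmap_page comments.
 * If the task migrates between reading pc->index and issuing the mrs,
 * pc->lock changes and the loop retries, so whatever value the trapped
 * mrs left in the destination register is never actually used -- which
 * is why emulate_pmu() only has to make the trap non-fatal.
 */
static uint64_t read_self_count(volatile struct perf_event_mmap_page *pc)
{
        uint32_t seq, idx;
        uint64_t count;

        do {
                seq = pc->lock;
                asm volatile("" ::: "memory");  /* barrier() */

                idx = pc->index;        /* 0 means not readable from EL0 */
                count = pc->offset;
                if (pc->cap_user_rdpmc && idx)
                        count += read_counter(idx - 1);

                asm volatile("" ::: "memory");
        } while (pc->lock != seq);

        return count;
}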
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 2b807f129e60..daa7b31f2c73 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2166,8 +2166,8 @@ static int emulate_mrs(struct pt_regs *regs, u32 insn)
 }
 
 static struct undef_hook mrs_hook = {
-        .instr_mask = 0xfff00000,
-        .instr_val = 0xd5300000,
+        .instr_mask = 0xffff0000,
+        .instr_val = 0xd5380000,
         .pstate_mask = PSR_AA32_MODE_MASK,
         .pstate_val = PSR_MODE_EL0t,
         .fn = emulate_mrs,
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 3dc1265540df..1687f6d1fa27 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -19,9 +19,11 @@
  * along with this program. If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <asm/cpu.h>
 #include <asm/irq_regs.h>
 #include <asm/perf_event.h>
 #include <asm/sysreg.h>
+#include <asm/traps.h>
 #include <asm/virt.h>
 
 #include <linux/acpi.h>
@@ -1009,6 +1011,45 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
         return probe.present ? 0 : -ENODEV;
 }
 
+static int emulate_pmu(struct pt_regs *regs, u32 insn)
+{
+        u32 sys_reg, rt;
+        u32 pmuserenr;
+
+        sys_reg = (u32)aarch64_insn_decode_immediate(AARCH64_INSN_IMM_16, insn) << 5;
+        rt = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn);
+        pmuserenr = read_sysreg(pmuserenr_el0);
+
+        if ((pmuserenr & (ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR)) !=
+            (ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR))
+                return -EINVAL;
+
+        pt_regs_write_reg(regs, rt, 0);
+
+        arm64_skip_faulting_instruction(regs, 4);
+        return 0;
+}
+
+/*
+ * This hook will only be triggered by mrs
+ * instructions on PMU registers. This is mandatory
+ * in order to have a consistent behaviour even on
+ * big.LITTLE systems.
+ */
+static struct undef_hook pmu_hook = {
+        .instr_mask = 0xffff8800,
+        .instr_val = 0xd53b8800,
+        .fn = emulate_pmu,
+};
+
+static int __init enable_pmu_emulation(void)
+{
+        register_undef_hook(&pmu_hook);
+        return 0;
+}
+
+core_initcall(enable_pmu_emulation);
+
 static int armv8_pmu_init(struct arm_pmu *cpu_pmu)
 {
         int ret = armv8pmu_probe_pmu(cpu_pmu);
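As a side note on the two instr_mask/instr_val pairs above, the following stand-alone predicates (invented names, constants copied from the diff) spell out how the tightened cpufeature mask now only claims mrs accesses to the op0==3, op1==0 ID-register space, leaving the op0==3, op1==3 PMU counter space to the new hook:

#include <stdbool.h>
#include <stdint.h>

/* mrs Xt, <op0=3, op1=0, ...>: the ID/feature register space that
 * emulate_mrs() in cpufeature.c handles after this patch. */
static bool matches_mrs_hook(uint32_t insn)
{
        return (insn & 0xffff0000) == 0xd5380000;
}

/* mrs Xt, <op0=3, op1=3, CRn=0b1xxx, CRm=0b1xxx>: the PMU counter
 * registers (e.g. PMCCNTR_EL0, PMEVCNTR<n>_EL0) handled by the new
 * emulate_pmu() hook. */
static bool matches_pmu_hook(uint32_t insn)
{
        return (insn & 0xffff8800) == 0xd53b8800;
}

With the previous pair (0xfff00000/0xd5300000), the first predicate would have matched every trapped mrs, PMU accesses included, which is the overlap the commit message below complains about.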
In order to prevent userspace processes which access the pmu registers on a big.LITTLE system from being killed, we introduce a hook to handle undefined instructions. The goal is to keep the process from being interrupted by a signal when the fault is caused by the task being rescheduled while it accesses a counter, which makes that counter access invalid. Since, in that context, we cannot efficiently know how many counters are physically implemented on each pmu, we consider that any faulting counter access which is architecturally correct should not raise SIGILL as long as the permission bits are set accordingly.

This commit also tightens the mask of the mrs_hook declared in arch/arm64/kernel/cpufeature.c, which emulates feature register accesses only. This is necessary because the hook's mask was too wide and matched every mrs instruction, even those unrelated to the emulated registers, which interfered with the pmu emulation.

Signed-off-by: Raphael Gault <raphael.gault@arm.com>
---
 arch/arm64/kernel/cpufeature.c |  4 ++--
 arch/arm64/kernel/perf_event.c | 41 ++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+), 2 deletions(-)