Message ID | 1594368010-4419-2-git-send-email-amit.kachhap@arm.com (mailing list archive)
State      | New, archived
Series     | arm64: add Armv8.3 pointer authentication enhancements
On Fri, Jul 10, 2020 at 01:30:07PM +0530, Amit Daniel Kachhap wrote:
> Some Armv8.3 Pointer Authentication enhancements have been introduced
> which are mandatory for Armv8.6 and optional for ARMv8.3. These features
> are,
>
> * ARMv8.3-PAuth2 - An enhanced PAC generation logic is added which hardens
>   finding the correct PAC value of the authenticated pointer.
>
> * ARMv8.3-FPAC - Fault is generated now when the ptrauth authentication
>   instruction fails in authenticating the PAC present in the address.
>   This is different from earlier case when such failures just adds an
>   error code in the top byte and waits for subsequent load/store to abort.
>   The ptrauth instructions which may cause this fault are autiasp, retaa
>   etc.

Did we add anything for armv8.5 EnhancedPAC?

I think that just changes some detail of the signing/authentication
algorithms, rather than adding new insns or exceptions. So maybe there
was nothing to add anyway.

> The above features are now represented by additional configurations
> for the Address Authentication cpufeature.
>
> The userspace fault received in the kernel due to ARMv8.3-FPAC is treated
> as Illegal instruction and hence signal SIGILL is injected with ILL_ILLOPN
> as the signal code. Note that this is different from earlier ARMv8.3
> ptrauth where signal SIGSEGV is issued due to Pointer authentication
> failures. The in-kernel PAC fault causes kernel to crash.

Presumably this breaks code that checks pointers using AUT* but doesn't
attempt to dereference them if the check fails, perhaps calling some
userspace callback to handle it or throwing an exception.

Since the point of the architecture though is that you don't need to do
an explicit check, maybe code doesn't do this in practice. Checking
codesearch.debian.net or similar for explicit AUT* instructions could be
a good idea.

If this is a concern we might have to stop reporting HWCAP_PACA and
HWCAP_PACG, and define new hwcaps. That seems brutal, though.

>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v3:
>  * Removed ptrauth fault handler from 32 bit compat layer handler.
>  * Added a comment for in-kernel PAC fault kernel crash.
>  * Commit logs cleanup.
>
>  arch/arm64/include/asm/esr.h       |  4 +++-
>  arch/arm64/include/asm/exception.h |  1 +
>  arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
>  arch/arm64/kernel/entry-common.c   | 21 +++++++++++++++++++++
>  arch/arm64/kernel/traps.c          | 10 ++++++++++
>  5 files changed, 51 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> index 035003acfa87..22c81f1edda2 100644
> --- a/arch/arm64/include/asm/esr.h
> +++ b/arch/arm64/include/asm/esr.h
> @@ -35,7 +35,9 @@
>  #define ESR_ELx_EC_SYS64	(0x18)
>  #define ESR_ELx_EC_SVE		(0x19)
>  #define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
> -/* Unallocated EC: 0x1b - 0x1E */
> +/* Unallocated EC: 0x1B */
> +#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
> +/* Unallocated EC: 0x1D - 0x1E */
>  #define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
>  #define ESR_ELx_EC_IABT_LOW	(0x20)
>  #define ESR_ELx_EC_IABT_CUR	(0x21)
> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index 7577a754d443..3140b90f2e4f 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h
> @@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
>  void do_cp15instr(unsigned int esr, struct pt_regs *regs);
>  void do_el0_svc(struct pt_regs *regs);
>  void do_el0_svc_compat(struct pt_regs *regs);
> +void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr);
>  #endif	/* __ASM_EXCEPTION_H */
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 463175f80341..c71bcd0c002a 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -633,14 +633,22 @@
>  #define ID_AA64ISAR1_APA_SHIFT		4
>  #define ID_AA64ISAR1_DPB_SHIFT		0
>  
> -#define ID_AA64ISAR1_APA_NI			0x0
> -#define ID_AA64ISAR1_APA_ARCHITECTED		0x1
> -#define ID_AA64ISAR1_API_NI			0x0
> -#define ID_AA64ISAR1_API_IMP_DEF		0x1
> -#define ID_AA64ISAR1_GPA_NI			0x0
> -#define ID_AA64ISAR1_GPA_ARCHITECTED		0x1
> -#define ID_AA64ISAR1_GPI_NI			0x0
> -#define ID_AA64ISAR1_GPI_IMP_DEF		0x1
> +#define ID_AA64ISAR1_APA_NI				0x0
> +#define ID_AA64ISAR1_APA_ARCHITECTED			0x1
> +#define ID_AA64ISAR1_APA_ARCH_EPAC			0x2
> +#define ID_AA64ISAR1_APA_ARCH_EPAC2			0x3
> +#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC		0x4
> +#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB		0x5
> +#define ID_AA64ISAR1_API_NI				0x0
> +#define ID_AA64ISAR1_API_IMP_DEF			0x1
> +#define ID_AA64ISAR1_API_IMP_DEF_EPAC			0x2
> +#define ID_AA64ISAR1_API_IMP_DEF_EPAC2			0x3
> +#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC		0x4
> +#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB		0x5
> +#define ID_AA64ISAR1_GPA_NI				0x0
> +#define ID_AA64ISAR1_GPA_ARCHITECTED			0x1
> +#define ID_AA64ISAR1_GPI_NI				0x0
> +#define ID_AA64ISAR1_GPI_IMP_DEF			0x1
>  
>  /* id_aa64pfr0 */
>  #define ID_AA64PFR0_CSV3_SHIFT		60
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 3dbdf9752b11..8380ffb8160a 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
>  }
>  NOKPROBE_SYMBOL(el1_dbg);
>  
> +static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
> +{
> +	local_daif_inherit(regs);
> +	do_ptrauth_fault(regs, esr);
> +}
> +NOKPROBE_SYMBOL(el1_fpac);
> +
>  asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
>  {
>  	unsigned long esr = read_sysreg(esr_el1);
> @@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
>  	case ESR_ELx_EC_BRK64:
>  		el1_dbg(regs, esr);
>  		break;
> +	case ESR_ELx_EC_FPAC:
> +		el1_fpac(regs, esr);
> +		break;
>  	default:
>  		el1_inv(regs, esr);
>  	}
> @@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
>  }
>  NOKPROBE_SYMBOL(el0_svc);
>  
> +static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
> +{
> +	user_exit_irqoff();
> +	local_daif_restore(DAIF_PROCCTX);
> +	do_ptrauth_fault(regs, esr);
> +}
> +NOKPROBE_SYMBOL(el0_fpac);
> +
>  asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
>  {
>  	unsigned long esr = read_sysreg(esr_el1);
> @@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
>  	case ESR_ELx_EC_BRK64:
>  		el0_dbg(regs, esr);
>  		break;
> +	case ESR_ELx_EC_FPAC:
> +		el0_fpac(regs, esr);
> +		break;
>  	default:
>  		el0_inv(regs, esr);
>  	}
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index 47f651df781c..0f1e587393a0 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -479,6 +479,15 @@ void do_bti(struct pt_regs *regs)
>  }
>  NOKPROBE_SYMBOL(do_bti);
>  
> +void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
> +{
> +	/* In-kernel Pointer authentication fault causes kernel crash */

Doesn't arm64_notify_die() already dump a backtrace and kill the task?

I wonder whether force_signal_inject() is a reasonable thing to use
here. This is used in some similar places such as do_undefinstr(),
do_bti() etc.

force_signal_inject() might need some minor tweaking to handle this
case.

> +	BUG_ON(!user_mode(regs));

We need to kill the thread, but doesn't arm64_notify_die() already do
that?

(Don't take my word for it though...)

> +	arm64_notify_die("pointer authentication fault", regs, SIGILL,
> +			 ILL_ILLOPN, (void __user *)regs->pc, esr);
> +}
> +NOKPROBE_SYMBOL(do_ptrauth_fault);
> +
>  #define __user_cache_maint(insn, address, res)			\
>  	if (address >= user_addr_max()) {			\
>  		res = -EFAULT;					\
> @@ -775,6 +784,7 @@ static const char *esr_class_str[] = {
>  	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
>  	[ESR_ELx_EC_SVE]		= "SVE",
>  	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
> +	[ESR_ELx_EC_FPAC]		= "FPAC",
>  	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
>  	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
>  	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
> --
> 2.17.1
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
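The explicit-check idiom discussed above might look something like the following hypothetical userspace sketch. It is not taken from the patch or from any real project, and it assumes a toolchain that accepts the ARMv8.3 pointer authentication mnemonics (e.g. built with -march=armv8.3-a):

/*
 * Hypothetical sketch only. Pre-FPAC, a failed AUTIA merely corrupts the
 * pointer, so the caller can detect failure by comparing against the
 * XPAC-stripped value and fall back without dereferencing anything.
 * With FPAC, the AUTIA itself traps and the process gets SIGILL instead,
 * so this pattern stops working.
 */
#include <stdint.h>

static inline void *check_signed_ptr(void *signed_ptr, uint64_t modifier)
{
        void *authed = signed_ptr;
        void *stripped = signed_ptr;

        /* Authenticate with instruction key A and the given modifier */
        asm volatile("autia %0, %1" : "+r"(authed) : "r"(modifier));
        /* Strip the PAC without authenticating */
        asm volatile("xpaci %0" : "+r"(stripped));

        /* Pre-FPAC: a mismatch means authentication failed */
        return authed == stripped ? authed : NULL;
}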
Hi,

On 7/29/20 4:37 PM, Dave Martin wrote:
> On Fri, Jul 10, 2020 at 01:30:07PM +0530, Amit Daniel Kachhap wrote:
>> Some Armv8.3 Pointer Authentication enhancements have been introduced
>> which are mandatory for Armv8.6 and optional for ARMv8.3. These features
>> are,
>>
>> * ARMv8.3-PAuth2 - An enhanced PAC generation logic is added which hardens
>>   finding the correct PAC value of the authenticated pointer.
>>
>> * ARMv8.3-FPAC - Fault is generated now when the ptrauth authentication
>>   instruction fails in authenticating the PAC present in the address.
>>   This is different from earlier case when such failures just adds an
>>   error code in the top byte and waits for subsequent load/store to abort.
>>   The ptrauth instructions which may cause this fault are autiasp, retaa
>>   etc.
>
> Did we add anything for armv8.5 EnhancedPAC?
>
> I think that just changes some detail of the signing/authentication
> algorithms, rather than adding new insns or exceptions. So maybe there
> was nothing to add anyway.

Yes, agreed. Only small things are added here: cpufeature configurations
and a new exception class code. I will change the commit log to reflect
this.

>
>> The above features are now represented by additional configurations
>> for the Address Authentication cpufeature.
>>
>> The userspace fault received in the kernel due to ARMv8.3-FPAC is treated
>> as Illegal instruction and hence signal SIGILL is injected with ILL_ILLOPN
>> as the signal code. Note that this is different from earlier ARMv8.3
>> ptrauth where signal SIGSEGV is issued due to Pointer authentication
>> failures. The in-kernel PAC fault causes kernel to crash.
>
> Presumably this breaks code that checks pointers using AUT* but doesn't
> attempt to dereference them if the check fails, perhaps calling some
> userspace callback to handle it or throwing an exception.

Should we worry about it? EPAC is essentially added so that the
intermediate result of authentication is not made available, as that
would impact the security of the key or the PAC algorithm.

>
> Since the point of the architecture though is that you don't need to do
> an explicit check, maybe code doesn't do this in practice. Checking
> codesearch.debian.net or similar for explicit AUT* instructions could be
> good idea.

I will check if I can find any.

>
> If this is a concern we might have to stop reporting HWCAP_PACA and
> HWCAP_PACG, and define new hwcaps. That seems brutal, though.

Ok.

>
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Changes since v3:
>>  * Removed ptrauth fault handler from 32 bit compat layer handler.
>>  * Added a comment for in-kernel PAC fault kernel crash.
>>  * Commit logs cleanup.
>>
>>  arch/arm64/include/asm/esr.h       |  4 +++-
>>  arch/arm64/include/asm/exception.h |  1 +
>>  arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
>>  arch/arm64/kernel/entry-common.c   | 21 +++++++++++++++++++++
>>  arch/arm64/kernel/traps.c          | 10 ++++++++++
>>  5 files changed, 51 insertions(+), 9 deletions(-)
>>  	default:
>>  		el0_inv(regs, esr);
>>  	}
>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>> index 47f651df781c..0f1e587393a0 100644
>> --- a/arch/arm64/kernel/traps.c
>> +++ b/arch/arm64/kernel/traps.c
>> @@ -479,6 +479,15 @@ void do_bti(struct pt_regs *regs)
>>  }
>>  NOKPROBE_SYMBOL(do_bti);
>>  
>> +void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
>> +{
>> +	/* In-kernel Pointer authentication fault causes kernel crash */
>
> Doesn't arm64_notify_die() already dump a backtrace and kill the task?
>
> I wonder whether force_signal_inject() is a reasonable thing to use
> here. This is used in some similar places such as do_undefinstr(),
> do_bti() etc.

Ok, I will make it similar to do_bti(). It will be clearer that way.

>
> force_signal_inject() might need some minor tweaking to handle this
> case.
>
>> +	BUG_ON(!user_mode(regs));
>
> We need to kill the thread, but doesn't arm64_notify_die() already do
> that?

Yes, some other callers of arm64_notify_die() do not use BUG_ON for the
kernel case.

Thanks,
Amit

>
> (Don't take my word for it though...)
>
>> +	arm64_notify_die("pointer authentication fault", regs, SIGILL,
>> +			 ILL_ILLOPN, (void __user *)regs->pc, esr);
>> +}
>> +NOKPROBE_SYMBOL(do_ptrauth_fault);
>> +
>>  #define __user_cache_maint(insn, address, res)			\
>>  	if (address >= user_addr_max()) {			\
>>  		res = -EFAULT;					\
>> @@ -775,6 +784,7 @@ static const char *esr_class_str[] = {
>>  	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
>>  	[ESR_ELx_EC_SVE]		= "SVE",
>>  	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
>> +	[ESR_ELx_EC_FPAC]		= "FPAC",
>>  	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
>>  	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
>>  	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
>> --
>> 2.17.1
>>
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
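For reference, a do_bti()-style rework of do_ptrauth_fault() as discussed above might look roughly like the sketch below. This only illustrates the direction agreed in the thread, not the actual follow-up patch, and it assumes force_signal_inject() is extended to take the ESR as an extra argument, as Dave suggests:

/*
 * Rough sketch of the discussed rework, not the actual follow-up patch.
 * Modelled on do_bti(); assumes force_signal_inject() grows an extra
 * argument so the ESR can be reported along with the signal.
 */
void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
{
        /*
         * An FPAC exception taken from the kernel is a bug, so crash
         * rather than trying to carry on.
         */
        BUG_ON(!user_mode(regs));

        force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
}
NOKPROBE_SYMBOL(do_ptrauth_fault);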
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 035003acfa87..22c81f1edda2 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -35,7 +35,9 @@
 #define ESR_ELx_EC_SYS64	(0x18)
 #define ESR_ELx_EC_SVE		(0x19)
 #define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
-/* Unallocated EC: 0x1b - 0x1E */
+/* Unallocated EC: 0x1B */
+#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
+/* Unallocated EC: 0x1D - 0x1E */
 #define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
 #define ESR_ELx_EC_IABT_LOW	(0x20)
 #define ESR_ELx_EC_IABT_CUR	(0x21)
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 7577a754d443..3140b90f2e4f 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr);
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 463175f80341..c71bcd0c002a 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -633,14 +633,22 @@
 #define ID_AA64ISAR1_APA_SHIFT		4
 #define ID_AA64ISAR1_DPB_SHIFT		0
 
-#define ID_AA64ISAR1_APA_NI			0x0
-#define ID_AA64ISAR1_APA_ARCHITECTED		0x1
-#define ID_AA64ISAR1_API_NI			0x0
-#define ID_AA64ISAR1_API_IMP_DEF		0x1
-#define ID_AA64ISAR1_GPA_NI			0x0
-#define ID_AA64ISAR1_GPA_ARCHITECTED		0x1
-#define ID_AA64ISAR1_GPI_NI			0x0
-#define ID_AA64ISAR1_GPI_IMP_DEF		0x1
+#define ID_AA64ISAR1_APA_NI				0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED			0x1
+#define ID_AA64ISAR1_APA_ARCH_EPAC			0x2
+#define ID_AA64ISAR1_APA_ARCH_EPAC2			0x3
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC		0x4
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB		0x5
+#define ID_AA64ISAR1_API_NI				0x0
+#define ID_AA64ISAR1_API_IMP_DEF			0x1
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC			0x2
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2			0x3
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC		0x4
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB		0x5
+#define ID_AA64ISAR1_GPA_NI				0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED			0x1
+#define ID_AA64ISAR1_GPI_NI				0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF			0x1
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 3dbdf9752b11..8380ffb8160a 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
 }
 NOKPROBE_SYMBOL(el1_dbg);
 
+static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	local_daif_inherit(regs);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el1_fpac);
+
 asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el1_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el1_fpac(regs, esr);
+		break;
 	default:
 		el1_inv(regs, esr);
 	}
@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_svc);
 
+static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el0_fpac);
+
 asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el0_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el0_fpac(regs, esr);
+		break;
 	default:
 		el0_inv(regs, esr);
 	}
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 47f651df781c..0f1e587393a0 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -479,6 +479,15 @@ void do_bti(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(do_bti);
 
+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
+{
+	/* In-kernel Pointer authentication fault causes kernel crash */
+	BUG_ON(!user_mode(regs));
+	arm64_notify_die("pointer authentication fault", regs, SIGILL,
+			 ILL_ILLOPN, (void __user *)regs->pc, esr);
+}
+NOKPROBE_SYMBOL(do_ptrauth_fault);
+
 #define __user_cache_maint(insn, address, res)			\
 	if (address >= user_addr_max()) {			\
 		res = -EFAULT;					\
@@ -775,6 +784,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
 	[ESR_ELx_EC_SVE]		= "SVE",
 	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
+	[ESR_ELx_EC_FPAC]		= "FPAC",
 	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
 	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
 	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
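For context, the new APA/API field values introduced above could be consumed along the lines of the sketch below. This is a hypothetical illustration and not part of the patch; the real kernel does this through the cpufeature framework in arch/arm64/kernel/cpufeature.c rather than an ad hoc helper:

/*
 * Hypothetical illustration: check whether this CPU reports FPAC in either
 * the architected (APA) or IMP DEF (API) address authentication field of
 * ID_AA64ISAR1_EL1. Not part of the patch above.
 */
#include <asm/cpufeature.h>
#include <asm/sysreg.h>

static bool this_cpu_has_fpac(void)
{
        u64 isar1 = read_sysreg(id_aa64isar1_el1);
        unsigned int apa, api;

        apa = cpuid_feature_extract_unsigned_field(isar1, ID_AA64ISAR1_APA_SHIFT);
        api = cpuid_feature_extract_unsigned_field(isar1, ID_AA64ISAR1_API_SHIFT);

        /* Values 0x4 and above mean EPAC2 + FPAC in both fields */
        return apa >= ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC ||
               api >= ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC;
}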
Some Armv8.3 Pointer Authentication enhancements have been introduced
which are mandatory for Armv8.6 and optional for Armv8.3. These features
are:

* ARMv8.3-PAuth2 - An enhanced PAC generation logic is added which hardens
  finding the correct PAC value of the authenticated pointer.

* ARMv8.3-FPAC - A fault is now generated when a ptrauth authentication
  instruction fails to authenticate the PAC present in the address. This
  differs from the earlier behaviour, where such a failure just added an
  error code in the top byte and waited for a subsequent load/store to
  abort. The ptrauth instructions which may cause this fault are autiasp,
  retaa etc.

The above features are now represented by additional configurations for
the Address Authentication cpufeature.

A userspace fault received in the kernel due to ARMv8.3-FPAC is treated
as an illegal instruction, so SIGILL is injected with ILL_ILLOPN as the
signal code. Note that this differs from earlier ARMv8.3 ptrauth, where
SIGSEGV is issued for pointer authentication failures. An in-kernel PAC
fault causes the kernel to crash.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v3:
 * Removed ptrauth fault handler from the 32-bit compat layer handler.
 * Added a comment for the in-kernel PAC fault kernel crash.
 * Commit log cleanup.

 arch/arm64/include/asm/esr.h       |  4 +++-
 arch/arm64/include/asm/exception.h |  1 +
 arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
 arch/arm64/kernel/entry-common.c   | 21 +++++++++++++++++++++
 arch/arm64/kernel/traps.c          | 10 ++++++++++
 5 files changed, 51 insertions(+), 9 deletions(-)
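To make the user-visible change concrete, a hypothetical userspace test along the following lines could be used: on FPAC hardware the failing AUT* instruction itself delivers SIGILL with ILL_ILLOPN, whereas on older ptrauth hardware only the later dereference of the corrupted pointer faults with SIGSEGV. This is an illustration only; it assumes a CPU and kernel with address authentication enabled (HWCAP_PACA) and a toolchain accepting -march=armv8.3-a:

/* Hypothetical demo, not part of the patch. Build with -march=armv8.3-a. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void handler(int sig, siginfo_t *info, void *ctx)
{
        /* FPAC: SIGILL at the autiza; older ptrauth: SIGSEGV at the load */
        printf("signal %d, code %d, addr %p\n", sig, info->si_code, info->si_addr);
        _exit(0);
}

int main(void)
{
        struct sigaction sa = { .sa_sigaction = handler, .sa_flags = SA_SIGINFO };
        unsigned long p = 0x1234;       /* bogus PAC bits: authentication will fail */

        sigaction(SIGILL, &sa, NULL);
        sigaction(SIGSEGV, &sa, NULL);

        /* Authenticate p with instruction key A and a zero modifier */
        asm volatile("autiza %0" : "+r"(p));

        /* Only reached without FPAC; the corrupted pointer faults here instead */
        return *(volatile int *)p;
}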