Message ID: 7hd2f5icat.fsf@paris.lan (mailing list archive)
State: New, archived
On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
> Christopher Covington <cov@codeaurora.org> writes:
> > On 05/22/2014 03:27 PM, Larry Bassel wrote:
> >> Make calls to ct_user_enter when the kernel is exited
> >> and ct_user_exit when the kernel is entered (in el0_da,
> >> el0_ia, el0_svc, el0_irq and all of the "error" paths).
> >>
> >> These macros expand to function calls which will only work
> >> properly if el0_sync and related code has been rearranged
> >> (in a previous patch of this series).
> >>
> >> The calls to ct_user_exit are made after hw debugging has been
> >> enabled (enable_dbg_and_irq).
> >>
> >> The call to ct_user_enter is made at the beginning of the
> >> kernel_exit macro.
> >>
> >> This patch is based on earlier work by Kevin Hilman.
> >> Save/restore optimizations were also done by Kevin.
> >
> >> --- a/arch/arm64/kernel/entry.S
> >> +++ b/arch/arm64/kernel/entry.S
> >> @@ -30,6 +30,44 @@
> >>  #include <asm/unistd32.h>
> >>
> >>  /*
> >> + * Context tracking subsystem. Used to instrument transitions
> >> + * between user and kernel mode.
> >> + */
> >> +	.macro ct_user_exit, restore = 0
> >> +#ifdef CONFIG_CONTEXT_TRACKING
> >> +	bl	context_tracking_user_exit
> >> +	.if \restore == 1
> >> +	/*
> >> +	 * Save/restore needed during syscalls. Restore syscall arguments from
> >> +	 * the values already saved on stack during kernel_entry.
> >> +	 */
> >> +	ldp	x0, x1, [sp]
> >> +	ldp	x2, x3, [sp, #S_X2]
> >> +	ldp	x4, x5, [sp, #S_X4]
> >> +	ldp	x6, x7, [sp, #S_X6]
> >> +	.endif
> >> +#endif
> >> +	.endm
> >> +
> >> +	.macro ct_user_enter, save = 0
> >> +#ifdef CONFIG_CONTEXT_TRACKING
> >> +	.if \save == 1
> >> +	/*
> >> +	 * Save/restore only needed on syscall fastpath, which uses
> >> +	 * x0-x2.
> >> +	 */
> >> +	push	x2, x3
> >
> > Why is x3 saved?
>
> I'll respond here since I worked with Larry on the context save/restore
> part.
>
> [insert rather embarrassing disclaimer of ignorance of arm64 assembly]
>
> Based on my reading of the code, I figured only x0-x2 needed to be
> saved. However, based on some experiments with intentionally clobbering
> the registers[1] (as suggested by Mark Rutland) in order to make sure
> we're saving/restoring the right things, I discovered x3 was needed too
> (I missed updating the comment to mention x0-x3.)
>
> Maybe Will/Catalin/Mark R. can shed some light here?

I haven't checked all the code paths, but at least when pushing onto the
stack we must keep it 16-byte aligned (an architecture requirement).
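Catalin's alignment requirement is easy to model outside the kernel. The sketch below is a Python toy model (assumed constants: 8-byte x registers and the architectural 16-byte SP alignment); it only illustrates why registers must be pushed in pairs, such as x2 together with x3.

```python
# Toy model of AArch64 stack-pointer alignment (Python, not kernel code).
# Each general-purpose x register is 8 bytes wide, but the architecture
# requires SP to be 16-byte aligned whenever it is used to access memory.

REG_SIZE = 8    # bytes per x register
SP_ALIGN = 16   # architectural SP alignment requirement

def sp_after_push(sp, n_regs):
    """Stack pointer after pushing n_regs registers (stack grows down)."""
    return sp - n_regs * REG_SIZE

sp = 0x10000  # hypothetical, initially 16-byte-aligned stack pointer
assert sp_after_push(sp, 2) % SP_ALIGN == 0  # a pair keeps SP aligned
assert sp_after_push(sp, 1) % SP_ALIGN != 0  # a lone push misaligns SP
```

This is why `push x2, x3` drags x3 along even though only x0-x2 carry live values, and why a dummy such as xzr can serve as the second register instead.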
On Fri, May 23, 2014 at 03:51:07PM +0100, Catalin Marinas wrote:
> On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
> > Christopher Covington <cov@codeaurora.org> writes:
> > > On 05/22/2014 03:27 PM, Larry Bassel wrote:
[...]
> > >> +	.macro ct_user_enter, save = 0
> > >> +#ifdef CONFIG_CONTEXT_TRACKING
> > >> +	.if \save == 1
> > >> +	/*
> > >> +	 * Save/restore only needed on syscall fastpath, which uses
> > >> +	 * x0-x2.
> > >> +	 */
> > >> +	push	x2, x3
> > >
> > > Why is x3 saved?
> >
> > I'll respond here since I worked with Larry on the context save/restore
> > part.
[...]
> > Maybe Will/Catalin/Mark R. can shed some light here?
>
> I haven't checked all the code paths but at least for pushing onto the
> stack we must keep it 16-byte aligned (architecture requirement).

Sure -- if modifying the stack, we need to push/pop pairs of registers to
keep it aligned. It might be better to use xzr as the dummy value in that
case, to make it clear that the value doesn't really matter.

That said, ct_user_enter is only called in kernel_exit before we restore
the values off the stack, and the only register I can spot that we need
to preserve is x0, for the syscall return value. I can't see x1 or x2
being used any more specially than the rest of the remaining registers.
Am I missing something, or would it be sufficient to do the following?

	push	x0, xzr
	bl	context_tracking_user_enter
	pop	x0, xzr

Cheers,
Mark.
On Fri, May 23, 2014 at 04:55:44PM +0100, Mark Rutland wrote:
> On Fri, May 23, 2014 at 03:51:07PM +0100, Catalin Marinas wrote:
> > On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
> > I haven't checked all the code paths but at least for pushing onto the
> > stack we must keep it 16-byte aligned (architecture requirement).
>
> Sure -- if modifying the stack we need to push/pop pairs of registers to
> keep it aligned. It might be better to use xzr as the dummy value in
> that case to make it clear that the value doesn't really matter.
>
> That said, ct_user_enter is only called in kernel_exit before we restore
> the values off the stack, and the only register I can spot that we need
> to preserve is x0 for the syscall return value. I can't see x1 or x2
> being used any more specially than the rest of the remaining registers.
> Am I missing something, or would it be sufficient to do the following?
>
>	push	x0, xzr
>	bl	context_tracking_user_enter
>	pop	x0, xzr

... and if that works, then why are we using the stack instead of a
callee-saved register?

Will
Mark Rutland <mark.rutland@arm.com> writes:

> On Fri, May 23, 2014 at 03:51:07PM +0100, Catalin Marinas wrote:
>> On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
>> > Christopher Covington <cov@codeaurora.org> writes:
>> > > On 05/22/2014 03:27 PM, Larry Bassel wrote:
[...]
>> > >> +	.macro ct_user_enter, save = 0
>> > >> +#ifdef CONFIG_CONTEXT_TRACKING
>> > >> +	.if \save == 1
>> > >> +	/*
>> > >> +	 * Save/restore only needed on syscall fastpath, which uses
>> > >> +	 * x0-x2.
>> > >> +	 */
>> > >> +	push	x2, x3
>> > >
>> > > Why is x3 saved?
>> >
>> > I'll respond here since I worked with Larry on the context save/restore
>> > part.
[...]
>> > Maybe Will/Catalin/Mark R. can shed some light here?
>>
>> I haven't checked all the code paths but at least for pushing onto the
>> stack we must keep it 16-byte aligned (architecture requirement).
>
> Sure -- if modifying the stack we need to push/pop pairs of registers to
> keep it aligned. It might be better to use xzr as the dummy value in
> that case to make it clear that the value doesn't really matter.
>
> That said, ct_user_enter is only called in kernel_exit before we restore
> the values off the stack, and the only register I can spot that we need
> to preserve is x0 for the syscall return value. I can't see x1 or x2
> being used any more specially than the rest of the remaining registers.
> Am I missing something,

I don't think you're missing anything. I had thought my experiment in
clobbering registers uncovered that x1-x3 were also in use somewhere,
but in trying to reproduce that now, it's clear that only x0 is
important.
> or would it be sufficient to do the following?
>
>	push	x0, xzr
>	bl	context_tracking_user_enter
>	pop	x0, xzr

Yes, this seems to work. Following Will's suggestion of using a
callee-saved register to save x0, the updated version now looks like
this:

	.macro ct_user_enter, save = 0
#ifdef CONFIG_CONTEXT_TRACKING
	.if \save == 1
	/*
	 * We only have to save/restore x0 on the fast syscall path,
	 * where x0 contains the syscall return value.
	 */
	mov	x19, x0
	.endif
	bl	context_tracking_user_enter
	.if \save == 1
	mov	x0, x19
	.endif
#endif
	.endm

We'll update this as well as address the comments on PATCH 1/2 and send
a v5.

Thanks, guys, for the review and guidance as I'm wandering a bit in the
dark here in arm64 assembler land.

Cheers,
Kevin
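Kevin's final macro leans on the AArch64 procedure call standard: x0-x18 may be clobbered by a called C function, while x19-x28 must be preserved by the callee. The following Python toy model (register names are just dictionary keys; the clobbering function is a hypothetical stand-in for the real context_tracking_user_enter) sketches why stashing x0 in x19 survives the call:

```python
# Toy model of the callee-saved trick (Python, not kernel code).
# Per AAPCS64, x0-x18 are caller-saved and x19-x28 are callee-saved.

CALLER_SAVED = [f"x{i}" for i in range(19)]  # x0-x18

def clobbering_c_call(regs):
    """Hypothetical stand-in for context_tracking_user_enter: a C
    function is free to trash every caller-saved register."""
    for r in CALLER_SAVED:
        regs[r] = 0xdead

def ct_user_enter_fastpath(regs):
    regs["x19"] = regs["x0"]  # mov x19, x0
    clobbering_c_call(regs)   # bl  context_tracking_user_enter
    regs["x0"] = regs["x19"]  # mov x0, x19

regs = {f"x{i}": 0 for i in range(29)}
regs["x0"] = 42  # pretend syscall return value
ct_user_enter_fastpath(regs)
assert regs["x0"] == 42  # the return value survives the clobbering call
```

The real code additionally depends on x19 holding nothing live at that point in kernel_exit; the model only shows the calling-convention side of the argument.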
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 520da4c02ece..232f0200e88d 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -36,6 +36,25 @@
 	.macro ct_user_exit, restore = 0
 #ifdef CONFIG_CONTEXT_TRACKING
 	bl	context_tracking_user_exit
+	movz	x0, #0xff, lsl #48
+	movz	x1, #0xff, lsl #48
+	movz	x2, #0xff, lsl #48
+	movz	x3, #0xff, lsl #48
+	movz	x4, #0xff, lsl #48
+	movz	x5, #0xff, lsl #48
+	movz	x6, #0xff, lsl #48
+	movz	x7, #0xff, lsl #48
+	movz	x8, #0xff, lsl #48
+	movz	x9, #0xff, lsl #48
+	movz	x10, #0xff, lsl #48
+	movz	x11, #0xff, lsl #48
+	movz	x12, #0xff, lsl #48
+	movz	x13, #0xff, lsl #48
+	movz	x14, #0xff, lsl #48
+	movz	x15, #0xff, lsl #48
+	movz	x16, #0xff, lsl #48
+	movz	x17, #0xff, lsl #48
+	movz	x18, #0xff, lsl #48
 	.if \restore == 1
 	/*
 	 * Save/restore needed during syscalls. Restore syscall arguments from
@@ -60,6 +79,25 @@
 	push	x0, x1
 	.endif
 	bl	context_tracking_user_enter
+	movz	x0, #0xff, lsl #48
+	movz	x1, #0xff, lsl #48
+	movz	x2, #0xff, lsl #48
+	movz	x3, #0xff, lsl #48
+	movz	x4, #0xff, lsl #48
+	movz	x5, #0xff, lsl #48
+	movz	x6, #0xff, lsl #48
+	movz	x7, #0xff, lsl #48
+	movz	x8, #0xff, lsl #48
+	movz	x9, #0xff, lsl #48
+	movz	x10, #0xff, lsl #48
+	movz	x11, #0xff, lsl #48
+	movz	x12, #0xff, lsl #48
+	movz	x13, #0xff, lsl #48
+	movz	x14, #0xff, lsl #48
+	movz	x15, #0xff, lsl #48
+	movz	x16, #0xff, lsl #48
+	movz	x17, #0xff, lsl #48
+	movz	x18, #0xff, lsl #48
 	.if \save == 1
 	pop	x0, x1
 	pop	x2, x3
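The experiment Kevin refers to as [1] works by poisoning every caller-saved register with #0xff, lsl #48 (i.e. 0x00ff000000000000) right after the context-tracking call: any register the entry code fails to save and restore then carries the poison back out, where it is easy to spot. A Python toy model of the same idea (register names are dictionary keys, nothing more):

```python
# Toy model of the register-poisoning debug trick (Python, not kernel
# code). movz xN, #0xff, lsl #48 writes 0x00ff000000000000 into xN.

POISON = 0xff << 48  # 0x00ff000000000000

def poison_caller_saved(regs):
    """Poison x0-x18, mirroring the movz lines added in the diff."""
    for i in range(19):
        regs[f"x{i}"] = POISON
    return regs

regs = {f"x{i}": i for i in range(29)}
poison_caller_saved(regs)
assert regs["x3"] == POISON  # an unrestored x3 would now be visibly bogus
assert regs["x19"] == 19     # callee-saved registers are left alone
```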