Message ID | 20210503173615.21576-4-madvenka@linux.microsoft.com (mailing list archive)
---|---
State | New, archived
Series | arm64: Stack trace reliability checks in the unwinder
On Mon, May 03, 2021 at 12:36:14PM -0500, madvenka@linux.microsoft.com wrote:

> There are some SYM_CODE functions that are currently in ".text" or
> ".init.text" sections. Some of these are functions that the unwinder
> does not care about as they are not "interesting" to livepatch. These
> will remain in their current sections. The rest I have moved into a
> new section called ".code.text".

I was thinking it'd be good to do this by modifying SYM_CODE_START() to
emit the section; that way nobody can forget to put any SYM_CODE into a
special section. That does mean we'd have to first introduce a new
variant for specifying a section that lets us override things that need
to be in some specific section, and convert everything that's in a
special section over to that first, which is a bit annoying but feels
like it's worth it for the robustness. It'd also put some of the don't
cares into .code.text, but so long as they are actually don't cares
that should be fine!

> Don't care functions
> ====================

We also have a bunch of things like __cpu_soft_restart which don't seem
to be called out here but need to be in .idmap.text.
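For illustration, here is a minimal sketch of what such macros might look
like, built on the existing SYM_START()/SYM_END() helpers in
include/linux/linkage.h. The SYM_CODE_START_SECTION() variant name is
hypothetical, not part of the posted series:

/* Hypothetical override for code that must live in a specific section. */
#define SYM_CODE_START_SECTION(name, sect)		\
	.pushsection sect, "ax" ASM_NL			\
	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)

/* Default: plain SYM_CODE automatically lands in .code.text. */
#define SYM_CODE_START(name)				\
	SYM_CODE_START_SECTION(name, ".code.text")

/* Both start variants push a section, so a single END can pop it. */
#define SYM_CODE_END(name)				\
	SYM_END(name, SYM_T_NONE) ASM_NL		\
	.popsection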
On 5/6/21 9:12 AM, Mark Brown wrote:
> On Mon, May 03, 2021 at 12:36:14PM -0500, madvenka@linux.microsoft.com wrote:
>
>> There are some SYM_CODE functions that are currently in ".text" or
>> ".init.text" sections. Some of these are functions that the unwinder
>> does not care about as they are not "interesting" to livepatch. These
>> will remain in their current sections. The rest I have moved into a
>> new section called ".code.text".
>
> I was thinking it'd be good to do this by modifying SYM_CODE_START() to
> emit the section; that way nobody can forget to put any SYM_CODE into a
> special section. That does mean we'd have to first introduce a new
> variant for specifying a section that lets us override things that need
> to be in some specific section, and convert everything that's in a
> special section over to that first, which is a bit annoying but feels
> like it's worth it for the robustness. It'd also put some of the don't
> cares into .code.text, but so long as they are actually don't cares
> that should be fine!

OK. I could make the section an argument to SYM_CODE*() so that a
developer will never miss that. Some documentation may be in order so
the guidelines are clear. I will do the doc patch separately, if that
is alright with you all.

About the don't care functions - most of them are OK to be moved into
.code.text, but the hypervisor vector related code has a problem that I
have not debugged yet. If I add that code to .code.text, KVM
initialization fails in one case. In another case, when I actually test
with a VM, the VM does not come up. I am not sure, but it looks like
there may be some reloc issue going on. I don't have a handle on this
yet. So, for now, I will leave the hypervisor stuff in .text, but I
will mark it as TBD in the cover letter so we don't lose track of it.

>> Don't care functions
>> ====================
>
> We also have a bunch of things like __cpu_soft_restart which don't seem
> to be called out here but need to be in .idmap.text.

It is already in .idmap.text:

/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * CPU reset routines
 *
 * Copyright (C) 2001 Deep Blue Solutions Ltd.
 * Copyright (C) 2012 ARM Ltd.
 * Copyright (C) 2015 Huawei Futurewei Technologies.
 */

#include <linux/linkage.h>
#include <asm/assembler.h>
#include <asm/sysreg.h>
#include <asm/virt.h>

.text
.pushsection	.idmap.text, "awx"

/*
 * __cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2) - Helper for
 * cpu_soft_restart.
 *
 * @el2_switch: Flag to indicate a switch to EL2 is needed.
 * @entry: Location to jump to for soft reset.
 * arg0: First argument passed to @entry. (relocation list)
 * arg1: Second argument passed to @entry. (physical kernel entry)
 * arg2: Third argument passed to @entry. (physical dtb address)
 *
 * Put the CPU into the same state as it would be if it had been reset, and
 * branch to what would be the reset vector. It must be executed with the
 * flat identity mapping.
 */
SYM_CODE_START(__cpu_soft_restart)
	mov_q	x12, INIT_SCTLR_EL1_MMU_OFF
	pre_disable_mmu_workaround
	/*
	 * either disable EL1&0 translation regime or disable EL2&0 translation
	 * regime if HCR_EL2.E2H == 1
	 */
	msr	sctlr_el1, x12
	isb

	cbz	x0, 1f				// el2_switch?
	mov	x0, #HVC_SOFT_RESTART
	hvc	#0				// no return

1:	mov	x8, x1				// entry
	mov	x0, x2				// arg0
	mov	x1, x3				// arg1
	mov	x2, x4				// arg2
	br	x8

SYM_CODE_END(__cpu_soft_restart)

.popsection

I will double check all the *.S files and make sure that every function
is accounted for in version 4. Stay tuned.

Thanks.

Madhavan
On 5/6/21 10:30 AM, Madhavan T. Venkataraman wrote:
>> I was thinking it'd be good to do this by modifying SYM_CODE_START() to
>> emit the section; that way nobody can forget to put any SYM_CODE into a
>> special section. That does mean we'd have to first introduce a new
>> variant for specifying a section that lets us override things that need
>> to be in some specific section, and convert everything that's in a
>> special section over to that first, which is a bit annoying but feels
>> like it's worth it for the robustness. It'd also put some of the don't
>> cares into .code.text, but so long as they are actually don't cares
>> that should be fine!
>
> OK. I could make the section an argument to SYM_CODE*() so that a
> developer will never miss that. Some documentation may be in order so
> the guidelines are clear. I will do the doc patch separately, if that
> is alright with you all.

There is just one problem with this. Sometimes, there is some data in
the same text section. That data will not get included when we do the
SYM_CODE(section) change.

Madhavan
On Thu, May 06, 2021 at 10:30:21AM -0500, Madhavan T. Venkataraman wrote:
> On 5/6/21 9:12 AM, Mark Brown wrote:
> > On Mon, May 03, 2021 at 12:36:14PM -0500, madvenka@linux.microsoft.com wrote:

> > I was thinking it'd be good to do this by modifying SYM_CODE_START() to
> > emit the section; that way nobody can forget to put any SYM_CODE into a
> > special section. That does mean we'd have to first introduce a new

> OK. I could make the section an argument to SYM_CODE*() so that a
> developer will never miss that. Some documentation may be in order so
> the guidelines are clear. I will do the doc patch separately, if that
> is alright with you all.

I was thinking to have the standard SYM_CODE default to a section, then
a variant for anything that cares (like how we have SYM_FUNC_PI and
friends for the PI code for EFI).

> > We also have a bunch of things like __cpu_soft_restart which don't seem
> > to be called out here but need to be in .idmap.text.

> It is already in .idmap.text.

Right, I meant that I was expecting to see things that need to be in a
specific section other than .code.text called out separately here if
we're enumerating them. Though if the annotations are done separately,
this patch wouldn't need to do that calling out at all; it'd be covered
as part of fiddling around with the annotations.
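As a usage sketch of that shape - with hypothetical symbol names, and
the SYM_CODE_START_SECTION() variant sketched earlier - most code would
simply keep using the default, while special cases name their section:

SYM_CODE_START(my_trampoline)		// hypothetical; defaults to .code.text
	ret
SYM_CODE_END(my_trampoline)

SYM_CODE_START_SECTION(my_idmap_helper, ".idmap.text")	// explicit override
	ret
SYM_CODE_END(my_idmap_helper)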
On Thu, May 06, 2021 at 10:32:30AM -0500, Madhavan T. Venkataraman wrote:
> On 5/6/21 10:30 AM, Madhavan T. Venkataraman wrote:

> > OK. I could make the section an argument to SYM_CODE*() so that a
> > developer will never miss that. Some documentation may be in order so
> > the guidelines are clear. I will do the doc patch separately, if that
> > is alright with you all.

> There is just one problem with this. Sometimes, there is some data in
> the same text section. That data will not get included when we do the
> SYM_CODE(section) change.

Yes, data would need to be handled separately still. That doesn't seem
insurmountable, though?
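One way the interleaved data could be handled - purely a sketch, with a
made-up symbol name - is to push the same section explicitly around the
data, since a section-emitting SYM_CODE_START() would only cover the
code itself:

	/* Hypothetical example: keep a literal table in .code.text
	 * next to the code that references it. */
	.pushsection ".code.text", "ax"
SYM_DATA_START_LOCAL(my_lookup_table)		// made-up symbol
	.quad	0x0
	.quad	0x1
SYM_DATA_END(my_lookup_table)
	.popsection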
On 5/6/21 10:44 AM, Mark Brown wrote:
> On Thu, May 06, 2021 at 10:32:30AM -0500, Madhavan T. Venkataraman wrote:
>> On 5/6/21 10:30 AM, Madhavan T. Venkataraman wrote:
>
>>> OK. I could make the section an argument to SYM_CODE*() so that a
>>> developer will never miss that. Some documentation may be in order so
>>> the guidelines are clear. I will do the doc patch separately, if that
>>> is alright with you all.
>
>> There is just one problem with this. Sometimes, there is some data in
>> the same text section. That data will not get included when we do the
>> SYM_CODE(section) change.
>
> Yes, data would need to be handled separately still. That doesn't seem
> insurmountable, though?
>

I will think of something.

Madhavan
On 5/6/21 10:37 AM, Mark Brown wrote:
> On Thu, May 06, 2021 at 10:30:21AM -0500, Madhavan T. Venkataraman wrote:
>> On 5/6/21 9:12 AM, Mark Brown wrote:
>>> On Mon, May 03, 2021 at 12:36:14PM -0500, madvenka@linux.microsoft.com wrote:
>
>>> I was thinking it'd be good to do this by modifying SYM_CODE_START() to
>>> emit the section; that way nobody can forget to put any SYM_CODE into a
>>> special section. That does mean we'd have to first introduce a new
>
>> OK. I could make the section an argument to SYM_CODE*() so that a
>> developer will never miss that. Some documentation may be in order so
>> the guidelines are clear. I will do the doc patch separately, if that
>> is alright with you all.
>
> I was thinking to have the standard SYM_CODE default to a section, then
> a variant for anything that cares (like how we have SYM_FUNC_PI and
> friends for the PI code for EFI).
>

OK.

>>> We also have a bunch of things like __cpu_soft_restart which don't seem
>>> to be called out here but need to be in .idmap.text.
>
>> It is already in .idmap.text.
>
> Right, I meant that I was expecting to see things that need to be in a
> specific section other than .code.text called out separately here if
> we're enumerating them. Though if the annotations are done separately,
> this patch wouldn't need to do that calling out at all; it'd be covered
> as part of fiddling around with the annotations.
>

OK.

Madhavan
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 2f36b16a5b5d..bceda68aaa79 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -20,5 +20,6 @@ extern char __exittext_begin[], __exittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
 extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+extern char __code_text_start[], __code_text_end[];
 
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index b3e4f9a088b1..c0831a49c290 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -12,7 +12,9 @@
 #include <asm/ftrace.h>
 #include <asm/insn.h>
 
+	.text
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	.pushsection ".code.text", "ax"
 /*
  * Due to -fpatchable-function-entry=2, the compiler has placed two NOPs before
  * the regular function prologue. For an enabled callsite, ftrace_init_nop() and
@@ -135,6 +137,7 @@ SYM_CODE_START(ftrace_graph_caller)
 	b	ftrace_common_return
 SYM_CODE_END(ftrace_graph_caller)
 #endif
+	.popsection
 
 #else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
 
@@ -315,6 +318,7 @@ SYM_FUNC_START(ftrace_stub)
 SYM_FUNC_END(ftrace_stub)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	.pushsection ".code.text", "ax"
 /*
  * void return_to_handler(void)
  *
@@ -342,4 +346,5 @@ SYM_CODE_START(return_to_handler)
 	ret
 SYM_CODE_END(return_to_handler)
 
+	.popsection
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 6acfc5e6b5e0..3f9f7f80cd65 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -402,6 +402,7 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 	.endm
 
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
+	.pushsection ".code.text", "ax"
 	/*
	 * Set the TTBR0 PAN bit in SPSR. When the exception is taken from
	 * EL0, there is no need to check the state of TTBR0_EL1 since
@@ -442,6 +443,7 @@ SYM_CODE_START_LOCAL(__swpan_exit_el0)
	 */
 	b	post_ttbr_update_workaround
 SYM_CODE_END(__swpan_exit_el0)
+	.popsection
 #endif
 
 	.macro	irq_stack_entry
@@ -950,6 +952,7 @@ SYM_FUNC_START(cpu_switch_to)
 SYM_FUNC_END(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
 
+	.pushsection ".code.text", "ax"
 /*
  * This is how we return from a fork.
  */
@@ -962,6 +965,7 @@ SYM_CODE_START(ret_from_fork)
 	b	ret_to_user
 SYM_CODE_END(ret_from_fork)
 NOKPROBE(ret_from_fork)
+	.popsection
 
 #ifdef CONFIG_ARM_SDE_INTERFACE
 
@@ -1040,6 +1044,7 @@ SYM_DATA_END(__sdei_asm_trampoline_next_handler)
 #endif /* CONFIG_RANDOMIZE_BASE */
 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
+	.pushsection ".code.text", "ax"
 /*
  * Software Delegated Exception entry point.
  *
@@ -1150,4 +1155,5 @@ alternative_else_nop_endif
 #endif
 SYM_CODE_END(__sdei_asm_handler)
 NOKPROBE(__sdei_asm_handler)
+	.popsection
 #endif /* CONFIG_ARM_SDE_INTERFACE */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 840bda1869e9..4ce96dfac2b8 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -75,7 +75,7 @@
 	__EFI_PE_HEADER
 
 	__INIT
-
+	.pushsection ".code.text", "ax"
 /*
  * The following callee saved general purpose registers are used on the
  * primary lowlevel boot path:
@@ -120,6 +120,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	mov	x1, #0x20			// 4 x 8 bytes
 	b	__inval_dcache_area		// tail call
 SYM_CODE_END(preserve_boot_args)
+	.popsection
 
 /*
  * Macro to create a table entry to the next page.
diff --git a/arch/arm64/kernel/probes/kprobes_trampoline.S b/arch/arm64/kernel/probes/kprobes_trampoline.S
index 288a84e253cc..9244e119af3e 100644
--- a/arch/arm64/kernel/probes/kprobes_trampoline.S
+++ b/arch/arm64/kernel/probes/kprobes_trampoline.S
@@ -8,6 +8,7 @@
 #include <asm/assembler.h>
 
 	.text
+	.pushsection ".code.text", "ax"
 
 	.macro	save_all_base_regs
 	stp x0, x1, [sp, #S_X0]
@@ -80,3 +81,4 @@ SYM_CODE_START(kretprobe_trampoline)
 	ret
 
 SYM_CODE_END(kretprobe_trampoline)
+	.popsection
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 1ff14615a55a..33e174160f9b 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -43,6 +43,8 @@ struct code_range sym_code_ranges[] =
 	{ (unsigned long)__entry_tramp_text_start,
 	  (unsigned long)__entry_tramp_text_end },
 #endif
+	{ (unsigned long)__code_text_start,
+	  (unsigned long)__code_text_end },
 	{ /* sentinel */ }
 };
 
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7eea7888bb02..c00b3232e6dc 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -103,6 +103,12 @@ jiffies = jiffies_64;
 #define TRAMP_TEXT
 #endif
 
+#define CODE_TEXT			\
+	. = ALIGN(SZ_4K);		\
+	__code_text_start = .;		\
+	*(.code.text)			\
+	__code_text_end = .;
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -145,6 +151,7 @@ SECTIONS
 			SOFTIRQENTRY_TEXT
 			ENTRY_TEXT
 			TEXT_TEXT
+			CODE_TEXT
 			SCHED_TEXT
 			CPUIDLE_TEXT
 			LOCK_TEXT
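For context, the sym_code_ranges[] table that stacktrace.c gains above is
what lets the unwinder flag a frame as unreliable when a return address
lands in SYM_CODE. A minimal sketch of such a check - assuming struct
code_range is { unsigned long start, end; } as the initializers above
imply, and with an illustrative helper name:

static bool pc_in_sym_code(unsigned long pc)
{
	const struct code_range *range;

	/* The zeroed sentinel entry terminates the array. */
	for (range = sym_code_ranges; range->start; range++) {
		if (pc >= range->start && pc < range->end)
			return true;	/* no reliable unwind info here */
	}
	return false;
}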