| Message ID | 20181205013408.47725-3-namit@vmware.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | x86/alternative: text_poke() enhancements |
There is no apparent reason not to use text_poke_early() while we are
in early init, as long as we do not patch code that might be on the
stack (i.e., code we will return into the middle of). This appears to
be the case for jump labels, so do so.

This is required for the next patches, which set a temporary mm for
patching; that mm is initialized only after some static keys are
enabled/disabled.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/jump_label.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
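For readers less familiar with the boot sequence, here is a rough sketch of why checking SYSTEM_BOOTING is sufficient. It is reconstructed (abridged, with my own comments) from init/main.c of roughly that era (~v4.20), not taken from this patch, so details may differ:

```c
noinline void __ref rest_init(void)
{
	/* ... spawn the kernel_init() and kthreadd threads ... */

	/*
	 * Up to this point the machine runs a single CPU and the
	 * kernel text is still writable: smp_init() and
	 * mark_rodata_ro() both run later, from the kernel_init()
	 * thread.  So observing system_state == SYSTEM_BOOTING means
	 * text_poke_early() is still safe.
	 */
	system_state = SYSTEM_SCHEDULING;

	/* ... schedule_preempt_disabled(); cpu_startup_entry(...); ... */
}
```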
```diff
diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index aac0c1f7e354..ed5fe274a7d8 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	if (early_boot_irqs_disabled)
+	/*
+	 * As long as we're UP and not yet marked RO, we can use
+	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
+	 * SYSTEM_SCHEDULING before going either.
+	 */
+	if (system_state == SYSTEM_BOOTING)
 		poker = text_poke_early;
 
 	if (type == JUMP_LABEL_JMP) {
```
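For context, a condensed sketch of the surrounding function with this patch applied, reconstructed from arch/x86/kernel/jump_label.c around v4.20. The expected-bytes sanity checking is elided and details may differ from the tree this applies to:

```c
static void __ref __jump_label_transform(struct jump_entry *entry,
					 enum jump_label_type type,
					 void *(*poker)(void *, const void *,
							size_t),
					 int init)
{
	union jump_code_union jmp;
	const void *code;

	jmp.jump = 0xe9;	/* near JMP rel32 */
	jmp.offset = jump_entry_target(entry) -
		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);

	/* This patch: use the cheap poker for the whole of early boot. */
	if (system_state == SYSTEM_BOOTING)
		poker = text_poke_early;

	/* Either the 5-byte JMP or the matching 5-byte NOP. */
	code = (type == JUMP_LABEL_JMP) ? (const void *)&jmp.code
					: (const void *)ideal_nops[NOP_ATOMIC5];

	if (poker) {
		(*poker)((void *)jump_entry_code(entry), code,
			 JUMP_LABEL_NOP_SIZE);
		return;
	}

	/*
	 * Otherwise take the expensive route: the INT3-based patching
	 * that is safe against concurrent execution.
	 */
	text_poke_bp((void *)jump_entry_code(entry), code,
		     JUMP_LABEL_NOP_SIZE,
		     (void *)jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
}
```

The point of the patch is visible in the middle: once the SYSTEM_BOOTING test selects text_poke_early(), the text_poke_bp() fallback (and, in the later patches of this series, the temporary-mm machinery behind text_poke()) is never reached during early init.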