Message ID | 20240411160526.2093408-4-rppt@kernel.org (mailing list archive) |
---|---|
State | Handled Elsewhere |
Series | x86/module: use large ROX pages for text allocations |
> On 11 Apr 2024, at 19:05, Mike Rapoport <rppt@kernel.org> wrote:
>
> @@ -2440,7 +2479,24 @@ static int post_relocation(struct module *mod, const struct load_info *info)
>         add_kallsyms(mod, info);
>
>         /* Arch-specific module finalizing. */
> -       return module_finalize(info->hdr, info->sechdrs, mod);
> +       ret = module_finalize(info->hdr, info->sechdrs, mod);
> +       if (ret)
> +               return ret;
> +
> +       for_each_mod_mem_type(type) {
> +               struct module_memory *mem = &mod->mem[type];
> +
> +               if (mem->is_rox) {
> +                       if (!execmem_update_copy(mem->base, mem->rw_copy,
> +                                                mem->size))
> +                               return -ENOMEM;
> +
> +                       vfree(mem->rw_copy);
> +                       mem->rw_copy = NULL;
> +               }
> +       }
> +
> +       return 0;
> }

I might be missing something, but it seems a bit racy.

IIUC, module_finalize() calls alternatives_smp_module_add(). At this
point, since you don’t hold the text_mutex, someone might do text_poke(),
e.g., by enabling/disabling a static key, and the update would be
overwritten. No?
On Tue, Apr 16, 2024 at 12:36:08PM +0300, Nadav Amit wrote:
> 
> > On 11 Apr 2024, at 19:05, Mike Rapoport <rppt@kernel.org> wrote:
> > 
> > @@ -2440,7 +2479,24 @@ static int post_relocation(struct module *mod, const struct load_info *info)
> >         add_kallsyms(mod, info);
> > 
> >         /* Arch-specific module finalizing. */
> > -       return module_finalize(info->hdr, info->sechdrs, mod);
> > +       ret = module_finalize(info->hdr, info->sechdrs, mod);
> > +       if (ret)
> > +               return ret;
> > +
> > +       for_each_mod_mem_type(type) {
> > +               struct module_memory *mem = &mod->mem[type];
> > +
> > +               if (mem->is_rox) {
> > +                       if (!execmem_update_copy(mem->base, mem->rw_copy,
> > +                                                mem->size))
> > +                               return -ENOMEM;
> > +
> > +                       vfree(mem->rw_copy);
> > +                       mem->rw_copy = NULL;
> > +               }
> > +       }
> > +
> > +       return 0;
> > }
> 
> I might be missing something, but it seems a bit racy.
> 
> IIUC, module_finalize() calls alternatives_smp_module_add(). At this
> point, since you don’t hold the text_mutex, someone might do text_poke(),
> e.g., by enabling/disabling a static key, and the update would be
> overwritten. No?

Right :(
Even worse, in the UP case alternatives_smp_unlock() will "patch" a still
empty area.

So I'm thinking about calling alternatives_smp_module_add() from an
additional callback, after execmem_update_copy().

Does it make sense to you?
> On 18 Apr 2024, at 13:20, Mike Rapoport <rppt@kernel.org> wrote:
> 
> On Tue, Apr 16, 2024 at 12:36:08PM +0300, Nadav Amit wrote:
>> 
>> 
>> 
>> I might be missing something, but it seems a bit racy.
>> 
>> IIUC, module_finalize() calls alternatives_smp_module_add(). At this
>> point, since you don’t hold the text_mutex, someone might do text_poke(),
>> e.g., by enabling/disabling a static key, and the update would be
>> overwritten. No?
> 
> Right :(
> Even worse, in the UP case alternatives_smp_unlock() will "patch" a still
> empty area.
> 
> So I'm thinking about calling alternatives_smp_module_add() from an
> additional callback, after execmem_update_copy().
> 
> Does it make sense to you?

Going over the code again - I might have just been wrong: I confused the
alternatives and the jump-label mechanisms (as they do share a lot of
code and characteristics).

The jump-labels are updated when prepare_coming_module() is called, which
happens after post_relocation() [which means they would be updated using
text_poke() “inefficiently” but should be safe].

The “alternatives” appear to use text_poke() (in contrast to
text_poke_early()) only from a few very specific flows, e.g.,
common_cpu_up() -> alternatives_enable_smp().

Do those flows pose a problem after boot?

Anyhow, sorry for the noise.
On Thu, Apr 18, 2024 at 10:31:16PM +0300, Nadav Amit wrote:
> 
> > On 18 Apr 2024, at 13:20, Mike Rapoport <rppt@kernel.org> wrote:
> > 
> > On Tue, Apr 16, 2024 at 12:36:08PM +0300, Nadav Amit wrote:
> >> 
> >> 
> >> 
> >> I might be missing something, but it seems a bit racy.
> >> 
> >> IIUC, module_finalize() calls alternatives_smp_module_add(). At this
> >> point, since you don’t hold the text_mutex, someone might do text_poke(),
> >> e.g., by enabling/disabling a static key, and the update would be
> >> overwritten. No?
> > 
> > Right :(
> > Even worse, in the UP case alternatives_smp_unlock() will "patch" a still
> > empty area.
> > 
> > So I'm thinking about calling alternatives_smp_module_add() from an
> > additional callback, after execmem_update_copy().
> > 
> > Does it make sense to you?
> 
> Going over the code again - I might have just been wrong: I confused the
> alternatives and the jump-label mechanisms (as they do share a lot of
> code and characteristics).
> 
> The jump-labels are updated when prepare_coming_module() is called, which
> happens after post_relocation() [which means they would be updated using
> text_poke() “inefficiently” but should be safe].
> 
> The “alternatives” appear to use text_poke() (in contrast to
> text_poke_early()) only from a few very specific flows, e.g.,
> common_cpu_up() -> alternatives_enable_smp().
> 
> Do those flows pose a problem after boot?

Yes, common_cpu_up() is called on CPU hotplug, so it's possible to have a
race between alternatives_smp_module_add() and
common_cpu_up() -> alternatives_enable_smp().

And in the UP case alternatives_smp_module_add() will call
alternatives_smp_unlock(), which will patch the module text before it is
updated.

> Anyhow, sorry for the noise.

On the contrary, I would have missed it.
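To make the ordering proposed above concrete, here is a minimal sketch (not part of the posted series) of post_relocation() with the SMP-alternatives registration deferred behind a hypothetical module_arch_post_copy() hook that only runs after execmem_update_copy(); the hook name and its weak default are assumptions for illustration:

/*
 * Sketch only: a hypothetical per-arch hook, invoked after the module
 * text has been copied into its ROX mapping. On x86 it could call
 * alternatives_smp_module_add(), so that any immediate UP "unlock"
 * patching operates on real instructions rather than the empty ROX area.
 */
int __weak module_arch_post_copy(struct module *mod)
{
        return 0;
}

static int post_relocation(struct module *mod, const struct load_info *info)
{
        int ret;

        /* ... existing finalization, minus alternatives_smp_module_add() ... */
        ret = module_finalize(info->hdr, info->sechdrs, mod);
        if (ret)
                return ret;

        for_each_mod_mem_type(type) {
                struct module_memory *mem = &mod->mem[type];

                if (!mem->is_rox)
                        continue;

                /* Publish the fully relocated text into the ROX range. */
                if (!execmem_update_copy(mem->base, mem->rw_copy, mem->size))
                        return -ENOMEM;

                vfree(mem->rw_copy);
                mem->rw_copy = NULL;
        }

        /* Only now expose the module to the alternatives machinery. */
        return module_arch_post_copy(mod);
}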
diff --git a/include/linux/execmem.h b/include/linux/execmem.h
index ffd0d12feef5..9d22999dbd7d 100644
--- a/include/linux/execmem.h
+++ b/include/linux/execmem.h
@@ -46,9 +46,11 @@ enum execmem_type {
 /**
  * enum execmem_range_flags - options for executable memory allocations
  * @EXECMEM_KASAN_SHADOW: allocate kasan shadow
+ * @EXECMEM_ROX_CACHE: allocated memory should be mapped read-only executable
  */
 enum execmem_range_flags {
         EXECMEM_KASAN_SHADOW = (1 << 0),
+        EXECMEM_ROX_CACHE = (1 << 1),
 };
 
 /**
@@ -123,6 +125,27 @@ void *execmem_alloc(enum execmem_type type, size_t size);
  */
 void execmem_free(void *ptr);
 
+/**
+ * execmem_update_copy - copy an update to executable memory
+ * @dst:  destination address to update
+ * @src:  source address containing the data
+ * @size: how many bytes of memory should be copied
+ *
+ * Copy @size bytes from @src to @dst using text poking if the memory at
+ * @dst is read-only.
+ *
+ * Return: a pointer to @dst or NULL on error
+ */
+void *execmem_update_copy(void *dst, const void *src, size_t size);
+
+/**
+ * execmem_is_rox - check if execmem is read-only
+ * @type - the execmem type to check
+ *
+ * Return: %true if the @type is read-only, %false if it's writable
+ */
+bool execmem_is_rox(enum execmem_type type);
+
 #ifdef CONFIG_ARCH_WANTS_EXECMEM_EARLY
 void execmem_early_init(void);
 #else
diff --git a/include/linux/module.h b/include/linux/module.h
index 1153b0d99a80..3df3202680a2 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -361,6 +361,8 @@ enum mod_mem_type {
 
 struct module_memory {
         void *base;
+        void *rw_copy;
+        bool is_rox;
         unsigned int size;
 
 #ifdef CONFIG_MODULES_TREE_LOOKUP
@@ -368,6 +370,15 @@ struct module_memory {
 #endif
 };
 
+#ifdef CONFIG_MODULES
+unsigned long module_writable_offset(struct module *mod, void *loc);
+#else
+static inline unsigned long module_writable_offset(struct module *mod, void *loc)
+{
+        return 0;
+}
+#endif
+
 #ifdef CONFIG_MODULES_TREE_LOOKUP
 /* Only touch one cacheline for common rbtree-for-core-layout case. */
 #define __module_memory_align ____cacheline_aligned
diff --git a/kernel/module/main.c b/kernel/module/main.c
index 91e185607d4b..f83fbb9c95ee 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -1188,6 +1188,21 @@ void __weak module_arch_freeing_init(struct module *mod)
 {
 }
 
+unsigned long module_writable_offset(struct module *mod, void *loc)
+{
+        if (!mod)
+                return 0;
+
+        for_class_mod_mem_type(type, text) {
+                struct module_memory *mem = &mod->mem[type];
+
+                if (loc >= mem->base && loc < mem->base + mem->size)
+                        return (unsigned long)(mem->rw_copy - mem->base);
+        }
+
+        return 0;
+}
+
 static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
 {
         unsigned int size = PAGE_ALIGN(mod->mem[type].size);
@@ -1205,6 +1220,23 @@ static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
         if (!ptr)
                 return -ENOMEM;
 
+        mod->mem[type].base = ptr;
+
+        if (execmem_is_rox(execmem_type)) {
+                ptr = vzalloc(size);
+
+                if (!ptr) {
+                        execmem_free(mod->mem[type].base);
+                        return -ENOMEM;
+                }
+
+                mod->mem[type].rw_copy = ptr;
+                mod->mem[type].is_rox = true;
+        } else {
+                mod->mem[type].rw_copy = mod->mem[type].base;
+                memset(mod->mem[type].base, 0, size);
+        }
+
         /*
          * The pointer to these blocks of memory are stored on the module
          * structure and we keep that around so long as the module is
@@ -1218,15 +1250,16 @@ static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
          */
         kmemleak_not_leak(ptr);
 
-        memset(ptr, 0, size);
-        mod->mem[type].base = ptr;
-
         return 0;
 }
 
 static void module_memory_free(struct module *mod, enum mod_mem_type type)
 {
-        void *ptr = mod->mem[type].base;
+        struct module_memory *mem = &mod->mem[type];
+        void *ptr = mem->base;
+
+        if (mem->is_rox)
+                vfree(mem->rw_copy);
 
         execmem_free(ptr);
 }
@@ -2237,6 +2270,7 @@ static int move_module(struct module *mod, struct load_info *info)
         for_each_mod_mem_type(type) {
                 if (!mod->mem[type].size) {
                         mod->mem[type].base = NULL;
+                        mod->mem[type].rw_copy = NULL;
                         continue;
                 }
 
@@ -2253,11 +2287,14 @@ static int move_module(struct module *mod, struct load_info *info)
                 void *dest;
                 Elf_Shdr *shdr = &info->sechdrs[i];
                 enum mod_mem_type type = shdr->sh_entsize >> SH_ENTSIZE_TYPE_SHIFT;
+                unsigned long offset = shdr->sh_entsize & SH_ENTSIZE_OFFSET_MASK;
+                unsigned long addr;
 
                 if (!(shdr->sh_flags & SHF_ALLOC))
                         continue;
 
-                dest = mod->mem[type].base + (shdr->sh_entsize & SH_ENTSIZE_OFFSET_MASK);
+                addr = (unsigned long)mod->mem[type].base + offset;
+                dest = mod->mem[type].rw_copy + offset;
 
                 if (shdr->sh_type != SHT_NOBITS) {
                         /*
@@ -2279,7 +2316,7 @@ static int move_module(struct module *mod, struct load_info *info)
                  * users of info can keep taking advantage and using the newly
                  * minted official memory area.
                  */
-                shdr->sh_addr = (unsigned long)dest;
+                shdr->sh_addr = addr;
                 pr_debug("\t0x%lx 0x%.8lx %s\n", (long)shdr->sh_addr, (long)shdr->sh_size,
                          info->secstrings + shdr->sh_name);
         }
@@ -2429,6 +2466,8 @@ int __weak module_finalize(const Elf_Ehdr *hdr,
 
 static int post_relocation(struct module *mod, const struct load_info *info)
 {
+        int ret;
+
         /* Sort exception table now relocations are done. */
         sort_extable(mod->extable, mod->extable + mod->num_exentries);
 
@@ -2440,7 +2479,24 @@ static int post_relocation(struct module *mod, const struct load_info *info)
         add_kallsyms(mod, info);
 
         /* Arch-specific module finalizing. */
-        return module_finalize(info->hdr, info->sechdrs, mod);
+        ret = module_finalize(info->hdr, info->sechdrs, mod);
+        if (ret)
+                return ret;
+
+        for_each_mod_mem_type(type) {
+                struct module_memory *mem = &mod->mem[type];
+
+                if (mem->is_rox) {
+                        if (!execmem_update_copy(mem->base, mem->rw_copy,
+                                                 mem->size))
+                                return -ENOMEM;
+
+                        vfree(mem->rw_copy);
+                        mem->rw_copy = NULL;
+                }
+        }
+
+        return 0;
 }
 
 /* Call module constructors. */
diff --git a/kernel/module/strict_rwx.c b/kernel/module/strict_rwx.c
index c45caa4690e5..239e5013359d 100644
--- a/kernel/module/strict_rwx.c
+++ b/kernel/module/strict_rwx.c
@@ -34,6 +34,9 @@ int module_enable_text_rox(const struct module *mod)
         for_class_mod_mem_type(type, text) {
                 int ret;
 
+                if (mod->mem[type].is_rox)
+                        continue;
+
                 if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
                         ret = module_set_memory(mod, type, set_memory_rox);
                 else
diff --git a/mm/execmem.c b/mm/execmem.c
index aabc0afabdbc..c920d2b5a721 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -4,6 +4,7 @@
 #include <linux/vmalloc.h>
 #include <linux/execmem.h>
 #include <linux/moduleloader.h>
+#include <linux/text-patching.h>
 
 static struct execmem_info *execmem_info __ro_after_init;
 static struct execmem_info default_execmem_info __ro_after_init;
@@ -63,6 +64,16 @@ void execmem_free(void *ptr)
         vfree(ptr);
 }
 
+void *execmem_update_copy(void *dst, const void *src, size_t size)
+{
+        return text_poke_copy(dst, src, size);
+}
+
+bool execmem_is_rox(enum execmem_type type)
+{
+        return !!(execmem_info->ranges[type].flags & EXECMEM_ROX_CACHE);
+}
+
 static bool execmem_validate(struct execmem_info *info)
 {
         struct execmem_range *r = &info->ranges[EXECMEM_DEFAULT];
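For context, module_writable_offset() is meant to be consumed by the arch code that applies relocations; those callers land in later patches of the series and are not shown here. Roughly, relocation values are computed against the ROX address while the resulting bytes are written into the writable copy. A simplified, hypothetical caller could look like the sketch below (write_reloc() is an illustration, not a function from the series):

/*
 * Illustration only: apply a relocation whose target 'loc' points into
 * the module's ROX text. module_writable_offset() returns 0 for non-ROX
 * sections, in which case the write goes straight to 'loc'.
 */
static void write_reloc(struct module *mod, void *loc, u64 val, int size)
{
        void *wr_loc = loc + module_writable_offset(mod, loc);

        memcpy(wr_loc, &val, size);
}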
From: "Mike Rapoport (IBM)" <rppt@kernel.org>

In order to support ROX allocations for module text, it is necessary to
handle modifications to the code, such as relocations and alternatives
patching, without write access to that memory.

One option is to use text patching, but this would make module loading
extremely slow and would expose executable code that is not yet in its
final form.

A better way is to have the memory allocated with ROX permissions contain
invalid instructions and to keep a writable, but not executable, copy of
the module text. The relocations and alternatives patching are done on the
writable copy using the addresses of the ROX memory. Once the module is
completely ready, the updated text is copied to the ROX memory using text
patching in one go and the writable copy is freed.

Add support for this to the module initialization code and provide the
necessary interfaces in execmem.
---
 include/linux/execmem.h    | 23 +++++++++++++
 include/linux/module.h     | 11 ++++++
 kernel/module/main.c       | 70 ++++++++++++++++++++++++++++++++++----
 kernel/module/strict_rwx.c |  3 ++
 mm/execmem.c               | 11 ++++++
 5 files changed, 111 insertions(+), 7 deletions(-)
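Condensed into one place, the load-time sequence this patch establishes for a ROX text region is roughly the following; this is a pseudocode-level summary of the hunks above, with error handling and the non-text memory types trimmed, and it collapses several functions into one fragment:

/* Summary sketch, not compilable as-is. */
mem->base    = execmem_alloc(EXECMEM_MODULE_TEXT, size); /* ROX, invalid insns  */
mem->rw_copy = vzalloc(size);                            /* plain writable copy */
mem->is_rox  = true;

/* move_module(): sections are laid out in rw_copy, but shdr->sh_addr,
 * symbols and kallsyms all point into the ROX base. */

/* apply_relocations() / module_finalize(): patch rw_copy, using the ROX
 * addresses for any address calculations. */

/* post_relocation(): publish the finished text in one go, then drop the
 * scratch copy. */
execmem_update_copy(mem->base, mem->rw_copy, mem->size);
vfree(mem->rw_copy);
mem->rw_copy = NULL;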