| Message ID | 20170912003726.368-5-konrad.wilk@oracle.com (mailing list archive) |
|---|---|
| State | New, archived |
Hi Konrad,

On 12/09/17 01:37, Konrad Rzeszutek Wilk wrote:
> The ARM 32&64 ELF specification says "sections containing ARM
> code must be at least 32-bit aligned." This patch adds the
> check for that. We also make sure that this check is done
> when doing relocations for the types that are considered
> ARM code. However we don't have to check for all as we only
> implement a small subset of them - as such we only check for
> data types that are implemented - and if the type is anything else
> and not aligned to 32-bit, then we error out.
>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
> v1: Initial patch
> v2: Redo the commit to include the commits which fix the alignment issues.
>     Also mention the need in the docs
> v3: Change the docs to explicitly mention text code section alignment requirements.
>     Invert arch_livepatch_verify_alignment return value (true for alignment is ok).
>     Drop the alignment check in check_special_sections.
>     Make the alignment check in check_section only for executable sections.
>     Rewrote the commit message as it is not applicable to v2 of the patch anymore.
> ---
>  docs/misc/livepatch.markdown   |  2 ++
>  xen/arch/arm/arm32/livepatch.c | 22 ++++++++++++++++++++--
>  xen/arch/arm/arm64/livepatch.c |  6 ++++++
>  xen/arch/x86/livepatch.c       |  6 ++++++
>  xen/common/livepatch.c         |  7 +++++++
>  xen/include/xen/livepatch.h    |  1 +
>  6 files changed, 42 insertions(+), 2 deletions(-)
>
> diff --git a/docs/misc/livepatch.markdown b/docs/misc/livepatch.markdown
> index 54a6b850cb..505dc37cda 100644
> --- a/docs/misc/livepatch.markdown
> +++ b/docs/misc/livepatch.markdown
> @@ -279,6 +279,8 @@ It may also have some architecture-specific sections. For example:
>   * Exception tables.
>   * Relocations for each of these sections.
>
> +Note that on ARM 32 the sections containing code MUST be four byte aligned.
> +
>  The Xen Live Patch core code loads the payload as a standard ELF binary, relocates it
>  and handles the architecture-specifc sections as needed. This process is much
>  like what the Linux kernel module loader does.
> diff --git a/xen/arch/arm/arm32/livepatch.c b/xen/arch/arm/arm32/livepatch.c
> index 41378a54ae..10887ace81 100644
> --- a/xen/arch/arm/arm32/livepatch.c
> +++ b/xen/arch/arm/arm32/livepatch.c
> @@ -112,6 +112,15 @@ bool arch_livepatch_symbol_deny(const struct livepatch_elf *elf,
>      return false;
>  }
>
> +bool arch_livepatch_verify_alignment(const struct livepatch_elf_sec *sec)
> +{
> +    if ( sec->sec->sh_flags & SHF_EXECINSTR &&

NIT: May I ask to add (...) here?

> +         ((uint32_t)sec->load_addr % sizeof(uint32_t)) )
> +        return false;
> +
> +    return true;
> +};
> +
>  static s32 get_addend(unsigned char type, void *dest)
>  {
>      s32 addend = 0;
> @@ -233,7 +242,7 @@ int arch_livepatch_perform(struct livepatch_elf *elf,
>          uint32_t val;
>          void *dest;
>          unsigned char type;
> -        s32 addend;
> +        s32 addend = 0;
>
>          if ( use_rela )
>          {
> @@ -251,7 +260,6 @@ int arch_livepatch_perform(struct livepatch_elf *elf,
>              symndx = ELF32_R_SYM(r->r_info);
>              type = ELF32_R_TYPE(r->r_info);
>              dest = base->load_addr + r->r_offset; /* P */
> -            addend = get_addend(type, dest);
>          }
>
>          if ( symndx == STN_UNDEF )
> @@ -272,6 +280,16 @@ int arch_livepatch_perform(struct livepatch_elf *elf,
>                      elf->name, symndx);
>              return -EINVAL;
>          }
> +        else if ( (type != R_ARM_ABS32 && type != R_ARM_REL32) /* Only check code. */ &&
> +                  ((uint32_t)dest % sizeof(uint32_t)) )
> +        {
> +            dprintk(XENLOG_ERR, LIVEPATCH "%s: dest=%p (%s) is not aligned properly!\n",
> +                    elf->name, dest, base->name);
> +            return -EINVAL;
> +        }
> +
> +        if ( !use_rela )
> +            addend = get_addend(type, dest);
>
>          val = elf->sym[symndx].sym->st_value; /* S */
>
> diff --git a/xen/arch/arm/arm64/livepatch.c b/xen/arch/arm/arm64/livepatch.c
> index 2247b925a0..2728e2a125 100644
> --- a/xen/arch/arm/arm64/livepatch.c
> +++ b/xen/arch/arm/arm64/livepatch.c
> @@ -86,6 +86,12 @@ bool arch_livepatch_symbol_deny(const struct livepatch_elf *elf,
>      return false;
>  }
>
> +bool arch_livepatch_verify_alignment(const struct livepatch_elf_sec *sec)
> +{
> +    /* Unaligned access on ARM 64 is OK. */

This function will be called on every section, right? If so, the text
segment still has to be aligned on Arm64. Instructions cannot be
unaligned. Unaligned access is only OK for data sections, and the only
reason is that the memcpy implementation performs unaligned accesses;
it seems to have better performance on some cores.

> +    return true;
> +}
> +
>  enum aarch64_reloc_op {
>      RELOC_OP_NONE,
>      RELOC_OP_ABS,
> diff --git a/xen/arch/x86/livepatch.c b/xen/arch/x86/livepatch.c
> index 406eb910cc..48d20fdacd 100644
> --- a/xen/arch/x86/livepatch.c
> +++ b/xen/arch/x86/livepatch.c
> @@ -148,6 +148,12 @@ bool arch_livepatch_symbol_deny(const struct livepatch_elf *elf,
>      return false;
>  }
>
> +bool arch_livepatch_verify_alignment(const struct livepatch_elf_sec *sec)
> +{
> +    /* Unaligned access on x86 is fine. */
> +    return true;
> +}
> +
>  int arch_livepatch_perform_rel(struct livepatch_elf *elf,
>                                 const struct livepatch_elf_sec *base,
>                                 const struct livepatch_elf_sec *rela)
> diff --git a/xen/common/livepatch.c b/xen/common/livepatch.c
> index c6ee95fbcf..dbab8a3f6f 100644
> --- a/xen/common/livepatch.c
> +++ b/xen/common/livepatch.c
> @@ -473,6 +473,13 @@ static bool section_ok(const struct livepatch_elf *elf,
>          return false;
>      }
>
> +    if ( !arch_livepatch_verify_alignment(sec) )
> +    {
> +        dprintk(XENLOG_ERR, LIVEPATCH "%s: %s text section is not aligned properly!\n",
> +                elf->name, sec->name);
> +        return false;
> +    }
> +
>      return true;
>  }
>
> diff --git a/xen/include/xen/livepatch.h b/xen/include/xen/livepatch.h
> index 98ec01216b..e9bab87f28 100644
> --- a/xen/include/xen/livepatch.h
> +++ b/xen/include/xen/livepatch.h
> @@ -76,6 +76,7 @@ void arch_livepatch_init(void);
>  #include <asm/livepatch.h>
>  int arch_livepatch_verify_func(const struct livepatch_func *func);
>
> +bool arch_livepatch_verify_alignment(const struct livepatch_elf_sec *sec);
>  static inline
>  unsigned int livepatch_insn_len(const struct livepatch_func *func)
>  {

Cheers,
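For illustration, one possible shape for the checks discussed above is sketched below. This is a sketch, not part of the posted patch: it adds the parentheses requested in the NIT, and an Arm64 variant that, following the review comment, still rejects unaligned executable sections because A64 instructions cannot be unaligned. The uintptr_t cast in the Arm64 version is an assumption made for the sketch; the two functions would live in the respective arm32 and arm64 livepatch.c files.

/* arm32: as in the patch, with the parentheses requested in the NIT. */
bool arch_livepatch_verify_alignment(const struct livepatch_elf_sec *sec)
{
    if ( (sec->sec->sh_flags & SHF_EXECINSTR) &&
         ((uint32_t)sec->load_addr % sizeof(uint32_t)) )
        return false;

    return true;
}

/*
 * arm64 (hypothetical, per the review comment): executable sections still
 * need 4-byte alignment since instructions cannot be unaligned; only data
 * accesses tolerate misalignment.
 */
bool arch_livepatch_verify_alignment(const struct livepatch_elf_sec *sec)
{
    if ( (sec->sec->sh_flags & SHF_EXECINSTR) &&
         ((uintptr_t)sec->load_addr % sizeof(uint32_t)) )
        return false;

    return true;
}

Data-only sections would still be allowed through unaligned on Arm64, matching the memcpy rationale given in the review.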