[v3,3/7] x86: re-work memcpy()

Message ID c2aa4307-230b-4287-b9e4-6d7d84dba490@suse.com (mailing list archive)
State New
Series x86: memcpy() / memset() (non-)ERMS flavors plus fallout

Commit Message

Jan Beulich Nov. 25, 2024, 2:28 p.m. UTC
Move the function to its own assembly file. Having it in C just for the
entire body to be an asm() isn't really helpful. Then have two flavors:
A "basic" version using qword steps for the bulk of the operation, and an
ERMS version for modern hardware, to be substituted in via alternatives
patching.

Alternatives patching, however, requires an extra precaution: It uses
memcpy() itself, and hence the function may patch itself. Luckily the
patched-in code only replaces the prolog of the original function. Make
sure it stays that way.

Additionally, alternatives patching, while supposedly safe via enforcing
a control flow change when modifying already prefetched code, may not
really be. Afaict a request is pending to drop the first of the two
options in the SDM's "Handling Self- and Cross-Modifying Code" section.
Insert a serializing instruction there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
We may want to consider branching over the REP MOVSQ as well, if the
number of qwords turns out to be zero.
We may also want to consider using non-REP MOVS{L,W,B} for the tail.
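
For illustration (not part of the patch), such a non-REP tail might look
like this, assuming the byte remainder (0-7) is still in %edx from the
"and $7, %edx" above:

        test    $4, %dl
        jz      1f
        movsl                    /* copy 4 bytes */
1:      test    $2, %dl
        jz      2f
        movsw                    /* copy 2 bytes */
2:      test    $1, %dl
        jz      3f
        movsb                    /* copy 1 byte */
3:      ret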

TBD: We may further need a workaround similar to Linux's 8ca97812c3c8
     ("x86/mce: Work around an erratum on fast string copy
     instructions").
---
v3: Re-base.

Comments

Andrew Cooper Nov. 26, 2024, 7:16 p.m. UTC | #1
On 25/11/2024 2:28 pm, Jan Beulich wrote:
> Move the function to its own assembly file. Having it in C just for the
> entire body to be an asm() isn't really helpful. Then have two flavors:
> A "basic" version using qword steps for the bulk of the operation, and an
> ERMS version for modern hardware, to be substituted in via alternatives
> patching.
>
> Alternatives patching, however, requires an extra precaution: It uses
> memcpy() itself, and hence the function may patch itself. Luckily the
> patched-in code only replaces the prolog of the original function. Make
> sure it stays that way.
>
> Additionally, alternatives patching, while supposedly safe via enforcing
> a control flow change when modifying already prefetched code, may not
> really be. Afaict a request is pending to drop the first of the two
> options in the SDM's "Handling Self- and Cross-Modifying Code" section.
> Insert a serializing instruction there.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> We may want to consider branching over the REP MOVSQ as well, if the
> number of qwords turns out to be zero.
> We may also want to consider using non-REP MOVS{L,W,B} for the tail.

My feedback for patch 2 is largely applicable here too.

>
> TBD: We may further need a workaround similar to Linux's 8ca97812c3c8
>      ("x86/mce: Work around an erratum on fast string copy
>      instructions").

Ah, so you found that erratum.  I'd say there's lower hanging fruit to
go after in the MCE logic before we get to this.

> ---
> v3: Re-base.
>
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -48,6 +48,7 @@ obj-$(CONFIG_INDIRECT_THUNK) += indirect
>  obj-$(CONFIG_PV) += ioport_emulate.o
>  obj-y += irq.o
>  obj-$(CONFIG_KEXEC) += machine_kexec.o
> +obj-y += memcpy.o
>  obj-y += memset.o
>  obj-y += mm.o x86_64/mm.o
>  obj-$(CONFIG_HVM) += monitor.o
> --- a/xen/arch/x86/alternative.c
> +++ b/xen/arch/x86/alternative.c
> @@ -153,12 +153,14 @@ void init_or_livepatch add_nops(void *in
>   * executing.
>   *
>   * "noinline" to cause control flow change and thus invalidate I$ and
> - * cause refetch after modification.
> + * cause refetch after modification.  While the SDM continues to suggest this
> + * is sufficient, it may not be - issue a serializing insn afterwards as well.

Did you find a problem in practice, or is this just in case?

I suspect if you are seeing problems, then it's non-atomicity of the
stores into memcpy() rather than serialisation.

>   */
>  static void init_or_livepatch noinline
>  text_poke(void *addr, const void *opcode, size_t len)
>  {
>      memcpy(addr, opcode, len);
> +    cpuid_eax(0);

This whole function is buggy in a couple of ways, starting with the
comments.

The comment about noinline and control flow changes is only really
relevant to 32-bit processors; we inherited it from Linux, and those
processors aren't applicable to Xen.

AMD64 (both the APM and the SDM) guarantees that Self-Modifying Code will
be dealt with on your behalf, with no serialisation needed.

Cross-modifying code needs far more severe serialisation than given
here.  We get away with it because alternative_{instructions,branches}()
are pre-SMP, and apply_alternatives() is run on livepatches prior to them
becoming live.
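
For reference, the SDM's cross-modifying code protocol amounts to roughly
the following (an illustrative C sketch, not Xen code; code_ready is a
made-up flag):

    static volatile unsigned int code_ready;

    /* Modifying CPU. */
    void modifier(void *dst, const void *src, size_t len)
    {
        memcpy(dst, src, len);      /* store new insn bytes as data */
        smp_wmb();                  /* order the stores before the flag */
        code_ready = 1;
    }

    /* Executing CPU(s). */
    void executor(void (*modified)(void))
    {
        while ( !code_ready )
            cpu_relax();
        cpuid_eax(0);               /* serialize before executing new bytes */
        modified();
    }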


I happen to know there's an AMD CPU which has an erratum regarding Self
Modifying Code and genuinely does need a serialising instruction, but I
don't know which exact CPU it is.

If we're going to put a serialising instruction, it should be a write to
CR2.  We don't care about 486 compatibility, and it's faster than CPUID
and much much faster if virtualised because it's unlikely to be
intercepted even under shadow paging.

But, it would be nice not to put serialisation in the general case to
begin with, especially not into the livepatching case.

~Andrew
Jan Beulich Nov. 27, 2024, 10:05 a.m. UTC | #2
On 26.11.2024 20:16, Andrew Cooper wrote:
> On 25/11/2024 2:28 pm, Jan Beulich wrote:
>> Move the function to its own assembly file. Having it in C just for the
>> entire body to be an asm() isn't really helpful. Then have two flavors:
>> A "basic" version using qword steps for the bulk of the operation, and an
>> ERMS version for modern hardware, to be substituted in via alternatives
>> patching.
>>
>> Alternatives patching, however, requires an extra precaution: It uses
>> memcpy() itself, and hence the function may patch itself. Luckily the
>> patched-in code only replaces the prolog of the original function. Make
>> sure it stays that way.
>>
>> Additionally, alternatives patching, while supposedly safe via enforcing
>> a control flow change when modifying already prefetched code, may not
>> really be. Afaict a request is pending to drop the first of the two
>> options in the SDM's "Handling Self- and Cross-Modifying Code" section.
>> Insert a serializing instruction there.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> We may want to consider branching over the REP MOVSQ as well, if the
>> number of qwords turns out to be zero.
>> We may also want to consider using non-REP MOVS{L,W,B} for the tail.
> 
> My feedback for patch 2 is largely applicable here too.

Sure, and I'll apply here whatever we decide to do there.

>> --- a/xen/arch/x86/alternative.c
>> +++ b/xen/arch/x86/alternative.c
>> @@ -153,12 +153,14 @@ void init_or_livepatch add_nops(void *in
>>   * executing.
>>   *
>>   * "noinline" to cause control flow change and thus invalidate I$ and
>> - * cause refetch after modification.
>> + * cause refetch after modification.  While the SDM continues to suggest this
>> + * is sufficient, it may not be - issue a serializing insn afterwards as well.
> 
> Did you find a problem in practice, or is this just in case?

It's been too long, so I can now only guess that it's just in case. The
comment change, otoh, suggests otherwise.

> I suspect if you are seeing problems, then it's non-atomicity of the
> stores into memcpy() rather than serialisation.

How would atomicity (or not) matter here? There shouldn't be any difference
between a single and any number of stores into the (previously executed)
insn stream.

>>   */
>>  static void init_or_livepatch noinline
>>  text_poke(void *addr, const void *opcode, size_t len)
>>  {
>>      memcpy(addr, opcode, len);
>> +    cpuid_eax(0);
> 
> This whole function is buggy in a couple of ways, starting with the
> comments.
> 
> The comment about noinline and control flow changes is only really
> relevant to 32-bit processors; we inherited it from Linux, and those
> processors aren't applicable to Xen.
> 
> AMD64 (both the APM and the SDM) guarantees that Self-Modifying Code will
> be dealt with on your behalf, with no serialisation needed.
> 
> Cross-modifying code needs far more severe serialisation than given
> here.  We get away with it because alternative_{instructions,branches}()
> are pre-SMP, and apply_alternatives() is run on livepatches prior to them
> becoming live.
> 
> 
> I happen to know there's an AMD CPU which has an erratum regarding Self
> Modifying Code and genuinely does need a serialising instruction, but I
> don't know which exact CPU it is.

Maybe I ran into that on one of the two older AMD systems I routinely
test on every once in a while?

> If we're going to put a serialising instruction, it should be a write to
> CR2.  We don't care about 486 compatibility, and it's faster than CPUID
> and much, much faster if virtualised because it's unlikely to be
> intercepted even under shadow paging.
> 
> But, it would be nice not to put serialisation in the general case to
> begin with, especially not into the livepatching case.

If you're aware of an erratum there, how can we get away without any
serialization? I can surely switch to a CR2 write, and I can also make
this dependent upon system_state (thus excluding the LP case).
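
I.e. something like this sketch (illustrative only; system_state and
read_cr2() as per existing Xen interfaces):

    static void init_or_livepatch noinline
    text_poke(void *addr, const void *opcode, size_t len)
    {
        memcpy(addr, opcode, len);
        if ( system_state < SYS_STATE_active )  /* boot-time patching only */
            asm volatile ( "mov %0, %%cr2" :: "r" (read_cr2()) : "memory" );
    }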

I notice that arch_livepatch_{apply,revert}() indeed use plain memcpy()
with just the noinline "protection". I wonder how well that works if a
livepatch actually touched the tail of either of these functions
(however unlikely that may be).

Jan

Patch

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -48,6 +48,7 @@ obj-$(CONFIG_INDIRECT_THUNK) += indirect
 obj-$(CONFIG_PV) += ioport_emulate.o
 obj-y += irq.o
 obj-$(CONFIG_KEXEC) += machine_kexec.o
+obj-y += memcpy.o
 obj-y += memset.o
 obj-y += mm.o x86_64/mm.o
 obj-$(CONFIG_HVM) += monitor.o
--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -153,12 +153,14 @@ void init_or_livepatch add_nops(void *in
  * executing.
  *
  * "noinline" to cause control flow change and thus invalidate I$ and
- * cause refetch after modification.
+ * cause refetch after modification.  While the SDM continues to suggest this
+ * is sufficient, it may not be - issue a serializing insn afterwards as well.
  */
 static void init_or_livepatch noinline
 text_poke(void *addr, const void *opcode, size_t len)
 {
     memcpy(addr, opcode, len);
+    cpuid_eax(0);
 }
 
 extern void *const __initdata_cf_clobber_start[];
--- /dev/null
+++ b/xen/arch/x86/memcpy.S
@@ -0,0 +1,20 @@ 
+#include <asm/asm_defns.h>
+
+FUNC(memcpy)
+        mov     %rdx, %rcx
+        mov     %rdi, %rax
+        /*
+         * We need to be careful here: memcpy() is involved in alternatives
+         * patching, so the code doing the actual copying (i.e. past setting
+         * up registers) may not be subject to patching (unless further
+         * precautions were taken).
+         */
+        ALTERNATIVE "and $7, %edx; shr $3, %rcx", \
+                    "rep movsb; ret", X86_FEATURE_ERMS
+        rep movsq
+        or      %edx, %ecx
+        jz      1f
+        rep movsb
+1:
+        ret
+END(memcpy)
--- a/xen/arch/x86/string.c
+++ b/xen/arch/x86/string.c
@@ -7,21 +7,6 @@ 
 
 #include <xen/lib.h>
 
-void *(memcpy)(void *dest, const void *src, size_t n)
-{
-    long d0, d1, d2;
-
-    asm volatile (
-        "   rep ; movs"__OS" ; "
-        "   mov %k4,%k3      ; "
-        "   rep ; movsb        "
-        : "=&c" (d0), "=&D" (d1), "=&S" (d2)
-        : "0" (n/BYTES_PER_LONG), "r" (n%BYTES_PER_LONG), "1" (dest), "2" (src)
-        : "memory" );
-
-    return dest;
-}
-
 void *(memmove)(void *dest, const void *src, size_t n)
 {
     long d0, d1, d2;