Message ID | 20230505035127.195387-1-mpe@ellerman.id.au (mailing list archive) |
---|---|
State | New |
Series | mm: kfence: Fix false positives on big endian |
On Fri, May 5, 2023 at 5:51 AM Michael Ellerman <mpe@ellerman.id.au> wrote:
> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()"), kfence reports failures in
> random places at boot on big endian machines.
>
> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> address of each byte in its value, so it needs to be byte swapped on big
> endian machines.
>
> The compiler is smart enough to do the le64_to_cpu() at compile time, so
> there is no runtime overhead.
>
> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Reviewed-by: Alexander Potapenko <glider@google.com>
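To make the failure mode concrete, here is a minimal stand-alone sketch (userspace C, not from this thread; buf stands in for an 8-byte kfence canary region) showing what the 64-bit fill leaves in memory on each endianness, versus the per-byte pattern the checker expects:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* The unswapped constant from before the fix. */
	uint64_t pattern = 0xaaaaaaaaaaaaaaaaULL ^ 0x0706050403020100ULL;
	unsigned char buf[8];

	/* Simulate the fast path: fill 8 canary bytes with one 64-bit store. */
	memcpy(buf, &pattern, sizeof(buf));

	for (int i = 0; i < 8; i++) {
		/* Per-byte canary: 0xaa XORed with the byte's offset. */
		unsigned char expected = 0xaa ^ i;
		printf("offset %d: stored 0x%02x, expected 0x%02x%s\n",
		       i, buf[i], expected,
		       buf[i] == expected ? "" : "  <-- mismatch on big endian");
	}
	return 0;
}

On little endian every byte matches; on big endian all eight differ, which is exactly why kfence reports apparently random failures at boot.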
On Fri, 5 May 2023 at 05:51, Michael Ellerman <mpe@ellerman.id.au> wrote:
>
> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()"), kfence reports failures in
> random places at boot on big endian machines.
>
> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> address of each byte in its value, so it needs to be byte swapped on big
> endian machines.
>
> The compiler is smart enough to do the le64_to_cpu() at compile time, so
> there is no runtime overhead.
>
> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Reviewed-by: Marco Elver <elver@google.com>

Andrew, is the Fixes enough to make it to stable as well or do we also
need Cc: stable?

Thanks,
-- Marco

> ---
>  mm/kfence/kfence.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 2aafc46a4aaf..392fb273e7bd 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -29,7 +29,7 @@
>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>   * at a time instead of byte by byte to improve performance.
>   */
> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>
>  /* Maximum stack depth for reports. */
>  #define KFENCE_STACK_DEPTH 64
> --
> 2.40.1
Marco Elver <elver@google.com> writes:
> On Fri, 5 May 2023 at 05:51, Michael Ellerman <mpe@ellerman.id.au> wrote:
>>
>> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
>> __kfence_alloc() and __kfence_free()"), kfence reports failures in
>> random places at boot on big endian machines.
>>
>> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
>> address of each byte in its value, so it needs to be byte swapped on big
>> endian machines.
>>
>> The compiler is smart enough to do the le64_to_cpu() at compile time, so
>> there is no runtime overhead.
>>
>> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>
> Reviewed-by: Marco Elver <elver@google.com>

Thanks.

> Andrew, is the Fixes enough to make it to stable as well or do we also
> need Cc: stable?

That commit is not in any releases yet (or even an rc), so as long as it
gets picked up before v6.4 then it won't need to go to stable.

cheers
From: Michael Ellerman
> Sent: 05 May 2023 04:51
>
> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()"), kfence reports failures in
> random places at boot on big endian machines.
>
> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> address of each byte in its value, so it needs to be byte swapped on big
> endian machines.
>
> The compiler is smart enough to do the le64_to_cpu() at compile time, so
> there is no runtime overhead.
>
> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
>  mm/kfence/kfence.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 2aafc46a4aaf..392fb273e7bd 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -29,7 +29,7 @@
>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>   * at a time instead of byte by byte to improve performance.
>   */
> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))

What are the (u64) casts for?
The constants should probably have a ul (or ull) suffix.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
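As an aside on the suffix question (an illustrative note, not from the thread): an unsuffixed hexadecimal constant already takes the first integer type wide enough to represent it, so both literals are 64-bit with or without a ULL suffix; the casts and suffixes mainly document intent.

#include <stdio.h>

int main(void)
{
	/* 0xaaaaaaaaaaaaaaaa exceeds LLONG_MAX, so even unsuffixed it gets an
	 * unsigned 64-bit type; 0x0706050403020100 fits in a signed 64-bit
	 * type. sizeof shows both are 8 bytes on typical targets. */
	printf("%zu\n", sizeof(0xaaaaaaaaaaaaaaaa)); /* 8 */
	printf("%zu\n", sizeof(0x0706050403020100)); /* 8 */
	return 0;
}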
On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:

> From: Michael Ellerman
> > Sent: 05 May 2023 04:51
> >
> > Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> > __kfence_alloc() and __kfence_free()"), kfence reports failures in
> > random places at boot on big endian machines.
> >
> > The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> > address of each byte in its value, so it needs to be byte swapped on big
> > endian machines.
> >
> > The compiler is smart enough to do the le64_to_cpu() at compile time, so
> > there is no runtime overhead.
> >
> > Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> > Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> > ---
> >  mm/kfence/kfence.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> > index 2aafc46a4aaf..392fb273e7bd 100644
> > --- a/mm/kfence/kfence.h
> > +++ b/mm/kfence/kfence.h
> > @@ -29,7 +29,7 @@
> >   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
> >   * at a time instead of byte by byte to improve performance.
> >   */
> > -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> > +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>
> What are the (u64) casts for?
> The constants should probably have a ul (or ull) suffix.

I tried that, didn't fix the sparse warnings described at
https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.

Michael, have you looked into this?

I'll merge it upstream - I guess we can live with the warnings for a while.
Andrew Morton <akpm@linux-foundation.org> writes:
> On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:
>
>> From: Michael Ellerman
>> > Sent: 05 May 2023 04:51
>> >
>> > Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
>> > __kfence_alloc() and __kfence_free()"), kfence reports failures in
>> > random places at boot on big endian machines.
>> >
>> > The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
>> > address of each byte in its value, so it needs to be byte swapped on big
>> > endian machines.
>> >
>> > The compiler is smart enough to do the le64_to_cpu() at compile time, so
>> > there is no runtime overhead.
>> >
>> > Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
>> > Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>> > ---
>> >  mm/kfence/kfence.h | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
>> > index 2aafc46a4aaf..392fb273e7bd 100644
>> > --- a/mm/kfence/kfence.h
>> > +++ b/mm/kfence/kfence.h
>> > @@ -29,7 +29,7 @@
>> >   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>> >   * at a time instead of byte by byte to improve performance.
>> >   */
>> > -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
>> > +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>>
>> What are the (u64) casts for?
>> The constants should probably have a ul (or ull) suffix.
>
> I tried that, didn't fix the sparse warnings described at
> https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.
>
> Michael, have you looked into this?

I haven't sorry, been chasing other bugs.

> I'll merge it upstream - I guess we can live with the warnings for a while.

Thanks, yeah spurious WARNs are more of a pain than some sparse warnings.

Maybe using le64_to_cpu() is too fancy, could just do it with an ifdef? eg.

diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 392fb273e7bd..510355a5382b 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -29,7 +29,11 @@
  * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
  * at a time instead of byte by byte to improve performance.
  */
-#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
+#ifdef __LITTLE_ENDIAN__
+#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0706050403020100ULL)
+#else
+#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0001020304050607ULL)
+#endif

 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64

cheers
On 18/05/2023 at 00:20, Andrew Morton wrote:
> On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:
>
>> From: Michael Ellerman
>>> Sent: 05 May 2023 04:51
>>>
>>> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
>>> __kfence_alloc() and __kfence_free()"), kfence reports failures in
>>> random places at boot on big endian machines.
>>>
>>> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
>>> address of each byte in its value, so it needs to be byte swapped on big
>>> endian machines.
>>>
>>> The compiler is smart enough to do the le64_to_cpu() at compile time, so
>>> there is no runtime overhead.
>>>
>>> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
>>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>>> ---
>>>  mm/kfence/kfence.h | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
>>> index 2aafc46a4aaf..392fb273e7bd 100644
>>> --- a/mm/kfence/kfence.h
>>> +++ b/mm/kfence/kfence.h
>>> @@ -29,7 +29,7 @@
>>>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>>>   * at a time instead of byte by byte to improve performance.
>>>   */
>>> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
>>> +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>>
>> What are the (u64) casts for?
>> The constants should probably have a ul (or ull) suffix.
>
> I tried that, didn't fix the sparse warnings described at
> https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.
>
> Michael, have you looked into this?
>
> I'll merge it upstream - I guess we can live with the warnings for a while.

sparse warning goes away with:

#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ le64_to_cpu((__force __le64)0x0706050403020100))

Christophe
On Fri, 2023-05-19 at 15:14 +1000, Michael Ellerman wrote:
> Andrew Morton <akpm@linux-foundation.org> writes:
> > On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:
> >
> > > From: Michael Ellerman
> > > > Sent: 05 May 2023 04:51
> > > >
> > > > Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> > > > __kfence_alloc() and __kfence_free()"), kfence reports failures in
> > > > random places at boot on big endian machines.
> > > >
> > > > The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> > > > address of each byte in its value, so it needs to be byte swapped on big
> > > > endian machines.
> > > >
> > > > The compiler is smart enough to do the le64_to_cpu() at compile time, so
> > > > there is no runtime overhead.
> > > >
> > > > Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> > > > Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> > > > ---
> > > >  mm/kfence/kfence.h | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> > > > index 2aafc46a4aaf..392fb273e7bd 100644
> > > > --- a/mm/kfence/kfence.h
> > > > +++ b/mm/kfence/kfence.h
> > > > @@ -29,7 +29,7 @@
> > > >   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
> > > >   * at a time instead of byte by byte to improve performance.
> > > >   */
> > > > -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> > > > +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
> > >
> > > What are the (u64) casts for?
> > > The constants should probably have a ul (or ull) suffix.
> >
> > I tried that, didn't fix the sparse warnings described at
> > https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.
> >
> > Michael, have you looked into this?
>
> I haven't sorry, been chasing other bugs.
>
> > I'll merge it upstream - I guess we can live with the warnings for a while.
>
> Thanks, yeah spurious WARNs are more of a pain than some sparse warnings.
>
> Maybe using le64_to_cpu() is too fancy, could just do it with an ifdef? eg.
>
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 392fb273e7bd..510355a5382b 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -29,7 +29,11 @@
>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>   * at a time instead of byte by byte to improve performance.
>   */
> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
> +#ifdef __LITTLE_ENDIAN__
> +#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0706050403020100ULL)
> +#else
> +#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0001020304050607ULL)
> +#endif
>
>  /* Maximum stack depth for reports. */
>  #define KFENCE_STACK_DEPTH 64
>
> cheers

(for the sparse errors)

As I understand, we require memory to look like "00 01 02 03 04 05 06 07"
such that iterating byte-by-byte gives 00, 01, etc. (with everything
XORed with 0xaa...).

I think it would be most semantically correct to use cpu_to_le64 on
KFENCE_CANARY_PATTERN_U64 and annotate the values being compared against
it as __le64. This is because we want the integer literal
0x0706050403020100 to be stored as "00 01 02 03 04 05 06 07", which is
the definition of little endian.

Masking this with an #ifdef leaves the type as cpu endian, which could
result in future issues.

(or I've just misunderstood and can disregard this)
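A rough sketch of the typing Benjamin describes (illustrative only; kfence_canary_ok() is a made-up helper name, not the actual kfence check site):

#include <linux/types.h>
#include <asm/byteorder.h>

/* Carry the pattern as __le64 so sparse sees one consistent type;
 * on big endian cpu_to_le64() does the byte swap, folded at compile
 * time for a constant. */
#define KFENCE_CANARY_PATTERN_LE64 \
	cpu_to_le64(0xaaaaaaaaaaaaaaaaULL ^ 0x0706050403020100ULL)

/* Hypothetical check: load memory as __le64 and compare like types. */
static inline bool kfence_canary_ok(const void *addr)
{
	return *(const __le64 *)addr == KFENCE_CANARY_PATTERN_LE64;
}

Either way the bytes in memory come out identical; the difference is only that sparse can then type-check the endianness conversion instead of warning about it.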
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 2aafc46a4aaf..392fb273e7bd 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -29,7 +29,7 @@
  * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
  * at a time instead of byte by byte to improve performance.
  */
-#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
+#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))

 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64
Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
__kfence_alloc() and __kfence_free()"), kfence reports failures in
random places at boot on big endian machines.

The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
address of each byte in its value, so it needs to be byte swapped on big
endian machines.

The compiler is smart enough to do the le64_to_cpu() at compile time, so
there is no runtime overhead.

Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 mm/kfence/kfence.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)