Message ID | 20220405135758.774016-2-catalin.marinas@arm.com (mailing list archive)
---|---
State | New, archived
Series | mm, arm64: Reduce ARCH_KMALLOC_MINALIGN below the cache line size
On Tue, Apr 05, 2022 at 02:57:49PM +0100, Catalin Marinas wrote:
> In preparation for supporting a dynamic kmalloc() minimum alignment,
> allow architectures to define ARCH_KMALLOC_MINALIGN independently of
> ARCH_DMA_MINALIGN. In addition, always define ARCH_DMA_MINALIGN even if
> an architecture does not override it.
>

[ +Cc slab maintainer/reviewers ]

I get why you want to set minimum alignment of kmalloc() dynamically.
That's because cache line size can be different and we cannot statically
know that, right?

But I don't get why you are trying to decouple ARCH_KMALLOC_MINALIGN
from ARCH_DMA_MINALIGN. kmalloc'ed buffer is always supposed to be DMA-safe.

I'm afraid this series may break some archs/drivers.

in Documentation/dma-api-howto.rst:

> 2) ARCH_DMA_MINALIGN
>
>    Architectures must ensure that kmalloc'ed buffer is
>    DMA-safe. Drivers and subsystems depend on it. If an architecture
>    isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
>    the CPU cache is identical to data in main memory),
>    ARCH_DMA_MINALIGN must be set so that the memory allocator
>    makes sure that kmalloc'ed buffer doesn't share a cache line with
>    the others. See arch/arm/include/asm/cache.h as an example.
>
>    Note that ARCH_DMA_MINALIGN is about DMA memory alignment
>    constraints. You don't need to worry about the architecture data
>    alignment constraints (e.g. the alignment constraints about 64-bit
>    objects).

If I'm missing something, please let me know :)

Thanks,
Hyeonggon

[...]
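To make the DMA-safety requirement concrete, the pattern the quoted documentation is protecting looks roughly like the sketch below. The helper function is hypothetical; kmalloc(), dma_map_single() and dma_unmap_single() are the standard kernel APIs being discussed:

```c
#include <linux/dma-mapping.h>
#include <linux/slab.h>

/* Hypothetical helper: receive one buffer from a device via streaming DMA. */
static int example_rx_one(struct device *dev, size_t len)
{
	dma_addr_t dma;
	void *buf;

	/*
	 * Drivers assume the kmalloc() buffer does not share a cache line
	 * with unrelated data, so the cache maintenance performed by the
	 * mapping below cannot corrupt a neighbouring object on a
	 * non-coherent SoC.
	 */
	buf = kmalloc(len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		kfree(buf);
		return -ENOMEM;
	}

	/* ... program "dma" into the device and wait for completion ... */

	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);
	kfree(buf);
	return 0;
}
```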
On Wed, Apr 6, 2022 at 1:59 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
>
> On Tue, Apr 05, 2022 at 02:57:49PM +0100, Catalin Marinas wrote:
> > In preparation for supporting a dynamic kmalloc() minimum alignment,
> > allow architectures to define ARCH_KMALLOC_MINALIGN independently of
> > ARCH_DMA_MINALIGN. In addition, always define ARCH_DMA_MINALIGN even if
> > an architecture does not override it.
> >
>
> [ +Cc slab maintainer/reviewers ]
>
> I get why you want to set minimum alignment of kmalloc() dynamically.
> That's because cache line size can be different and we cannot statically
> know that, right?
>
> But I don't get why you are trying to decouple ARCH_KMALLOC_MINALIGN
> from ARCH_DMA_MINALIGN. kmalloc'ed buffer is always supposed to be DMA-safe.
>
> I'm afraid this series may break some archs/drivers.
>
> in Documentation/dma-api-howto.rst:
> > 2) ARCH_DMA_MINALIGN
> >
> >    Architectures must ensure that kmalloc'ed buffer is
> >    DMA-safe. Drivers and subsystems depend on it. If an architecture
> >    isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
> >    the CPU cache is identical to data in main memory),
> >    ARCH_DMA_MINALIGN must be set so that the memory allocator
> >    makes sure that kmalloc'ed buffer doesn't share a cache line with
> >    the others. See arch/arm/include/asm/cache.h as an example.
> >
> >    Note that ARCH_DMA_MINALIGN is about DMA memory alignment
> >    constraints. You don't need to worry about the architecture data
> >    alignment constraints (e.g. the alignment constraints about 64-bit
> >    objects).
>
> If I'm missing something, please let me know :)

It helps in two ways:

- you can start with a relatively large hardcoded ARCH_DMA_MINALIGN
  of 128 or 256 bytes, depending on what the largest possible line size
  is for any machine you want to support, and then drop that down to
  32 or 64 bytes based on runtime detection. This should always be safe,
  and it means a very sizable chunk of wasted memory can be recovered.

- On systems that are fully cache coherent, there is no need to align
  kmalloc() allocations for DMA safety at all, on these, we can drop the
  size even below the cache line. This does not apply on most of the
  cheaper embedded or mobile SoCs, but it helps a lot on the machines
  you'd find in a data center.

        Arnd
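A minimal sketch of Arnd's first point — keep the compile-time worst case, then shrink the effective kmalloc() minimum alignment at boot once the real cache line size is known. The function and variable names are illustrative and not taken from the series; cache_line_size() is the existing kernel helper:

```c
#include <linux/cache.h>
#include <linux/init.h>
#include <linux/slab.h>

/*
 * Illustrative only: start from the compile-time worst case and shrink the
 * effective kmalloc() alignment once the boot CPU's cache geometry is known.
 */
static unsigned int example_kmalloc_minalign __ro_after_init = ARCH_DMA_MINALIGN;

void __init example_setup_kmalloc_minalign(void)
{
	unsigned int cls = cache_line_size();	/* runtime-detected line size */

	/*
	 * An Image built with a 128-byte ARCH_DMA_MINALIGN that boots on a
	 * SoC with 64-byte cache lines can safely drop to 64-byte kmalloc()
	 * alignment and reclaim the padding wasted on small objects.
	 */
	if (cls < example_kmalloc_minalign)
		example_kmalloc_minalign = cls;
}
```

The compile-time ARCH_DMA_MINALIGN stays at the worst case so that statically aligned structures remain safe on every SoC a single Image supports.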
On Wed, Apr 06, 2022 at 08:59:18AM +0900, Hyeonggon Yoo wrote:
> On Tue, Apr 05, 2022 at 02:57:49PM +0100, Catalin Marinas wrote:
> > In preparation for supporting a dynamic kmalloc() minimum alignment,
> > allow architectures to define ARCH_KMALLOC_MINALIGN independently of
> > ARCH_DMA_MINALIGN. In addition, always define ARCH_DMA_MINALIGN even if
> > an architecture does not override it.
>
> [ +Cc slab maintainer/reviewers ]
>
> I get why you want to set minimum alignment of kmalloc() dynamically.
> That's because cache line size can be different and we cannot statically
> know that, right?
>
> But I don't get why you are trying to decouple ARCH_KMALLOC_MINALIGN
> from ARCH_DMA_MINALIGN. kmalloc'ed buffer is always supposed to be DMA-safe.

Arnd already replied. With this series, kmalloc'ed buffers are still
DMA-safe for the SoC the kernel is running on.
On Wed, Apr 06, 2022 at 09:29:19AM +0200, Arnd Bergmann wrote:
> On Wed, Apr 6, 2022 at 1:59 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
> >
> > On Tue, Apr 05, 2022 at 02:57:49PM +0100, Catalin Marinas wrote:
> > > In preparation for supporting a dynamic kmalloc() minimum alignment,
> > > allow architectures to define ARCH_KMALLOC_MINALIGN independently of
> > > ARCH_DMA_MINALIGN. In addition, always define ARCH_DMA_MINALIGN even if
> > > an architecture does not override it.
> > >
> >
> > [ +Cc slab maintainer/reviewers ]
> >
> > I get why you want to set minimum alignment of kmalloc() dynamically.
> > That's because cache line size can be different and we cannot statically
> > know that, right?
> >
> > But I don't get why you are trying to decouple ARCH_KMALLOC_MINALIGN
> > from ARCH_DMA_MINALIGN. kmalloc'ed buffer is always supposed to be DMA-safe.
> >
> > I'm afraid this series may break some archs/drivers.
> >
> > in Documentation/dma-api-howto.rst:
> > > 2) ARCH_DMA_MINALIGN
> > >
> > >    Architectures must ensure that kmalloc'ed buffer is
> > >    DMA-safe. Drivers and subsystems depend on it. If an architecture
> > >    isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
> > >    the CPU cache is identical to data in main memory),
> > >    ARCH_DMA_MINALIGN must be set so that the memory allocator
> > >    makes sure that kmalloc'ed buffer doesn't share a cache line with
> > >    the others. See arch/arm/include/asm/cache.h as an example.
> > >
> > >    Note that ARCH_DMA_MINALIGN is about DMA memory alignment
> > >    constraints. You don't need to worry about the architecture data
> > >    alignment constraints (e.g. the alignment constraints about 64-bit
> > >    objects).
> >
> > If I'm missing something, please let me know :)
>
> It helps in two ways:
>
> - you can start with a relatively large hardcoded ARCH_DMA_MINALIGN
>   of 128 or 256 bytes, depending on what the largest possible line size
>   is for any machine you want to support, and then drop that down to
>   32 or 64 bytes based on runtime detection. This should always be safe,
>   and it means a very sizable chunk of wasted memory can be recovered.

I agree this part.

> - On systems that are fully cache coherent, there is no need to align
>   kmalloc() allocations for DMA safety at all, on these, we can drop the
>   size even below the cache line. This does not apply on most of the
>   cheaper embedded or mobile SoCs, but it helps a lot on the machines
>   you'd find in a data center.

Now I get the point. Thank you for explanation!
Going to review this series soon.

>
>         Arnd
On Tue, Apr 05, 2022 at 02:57:49PM +0100, Catalin Marinas wrote:
> In preparation for supporting a dynamic kmalloc() minimum alignment,
> allow architectures to define ARCH_KMALLOC_MINALIGN independently of
> ARCH_DMA_MINALIGN. In addition, always define ARCH_DMA_MINALIGN even if
> an architecture does not override it.
>
> After this patch, ARCH_DMA_MINALIGN is expected to be used in static
> alignment annotations and defined by an architecture to be the maximum
> alignment for all supported configurations/SoCs in a single Image.
> ARCH_KMALLOC_MINALIGN, if different, is the minimum alignment guaranteed
> by kmalloc().
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> ---
>  include/linux/slab.h | 23 ++++++++++++++++++-----
>  1 file changed, 18 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 373b3ef99f4e..d58211bdeceb 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -187,17 +187,30 @@ bool kmem_valid_obj(void *object);
> void kmem_dump_obj(void *object);
> #endif
>
> +/*
> + * slob does not support independent control of ARCH_KMALLOC_MINALIGN and
> + * ARCH_DMA_MINALIGN.
> + */
> +#ifdef CONFIG_SLOB
> +#undef ARCH_KMALLOC_MINALIGN
> +#endif

I think you should replace ARCH_KMALLOC_MINALIGN with ARCH_DMA_MINALIGN
in mm/slob.c too? Or detect minimum kmalloc alignment in runtime like SLAB/SLUB?

current code seem to break with SLOB on machines that has 128 byte cache lines
because ARCH_KMALLOC_MINALIGN is 64?

[...]
On Fri, Apr 08, 2022 at 03:42:13PM +0900, Hyeonggon Yoo wrote:
> On Tue, Apr 05, 2022 at 02:57:49PM +0100, Catalin Marinas wrote:
> > In preparation for supporting a dynamic kmalloc() minimum alignment,
> > allow architectures to define ARCH_KMALLOC_MINALIGN independently of
> > ARCH_DMA_MINALIGN. In addition, always define ARCH_DMA_MINALIGN even if
> > an architecture does not override it.
> >
[...]
> > +/*
> > + * slob does not support independent control of ARCH_KMALLOC_MINALIGN and
> > + * ARCH_DMA_MINALIGN.
> > + */
> > +#ifdef CONFIG_SLOB
> > +#undef ARCH_KMALLOC_MINALIGN
> > +#endif

Sorry for the noise. Yeah, the code above is making ARCH_KMALLOC_MINALIGN =
ARCH_DMA_MINALIGN. I was confused :(

> I think you should replace ARCH_KMALLOC_MINALIGN with ARCH_DMA_MINALIGN
> in mm/slob.c too? Or detect minimum kmalloc alignment in runtime like SLAB/SLUB?
>
> current code seem to break with SLOB on machines that has 128 byte cache lines
> because ARCH_KMALLOC_MINALIGN is 64?

[...]

--
Thanks,
Hyeonggon
On Fri, Apr 08, 2022 at 03:42:13PM +0900, Hyeonggon Yoo wrote:
> On Tue, Apr 05, 2022 at 02:57:49PM +0100, Catalin Marinas wrote:
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 373b3ef99f4e..d58211bdeceb 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -187,17 +187,30 @@ bool kmem_valid_obj(void *object);
> >  void kmem_dump_obj(void *object);
> >  #endif
> >
> > +/*
> > + * slob does not support independent control of ARCH_KMALLOC_MINALIGN and
> > + * ARCH_DMA_MINALIGN.
> > + */
> > +#ifdef CONFIG_SLOB
> > +#undef ARCH_KMALLOC_MINALIGN
> > +#endif
>
> I think you should replace ARCH_KMALLOC_MINALIGN with ARCH_DMA_MINALIGN
> in mm/slob.c too? Or detect minimum kmalloc alignment in runtime like SLAB/SLUB?

One step at a time. The slob approach is a bit different, doesn't generate
kmalloc-* caches, so I did not look at it yet. Also based on Vlastimil's
email, there is some reworking going on in there already.

> current code seem to break with SLOB on machines that has 128 byte cache lines
> because ARCH_KMALLOC_MINALIGN is 64?

Does it? The point of the #undef above was precisely to make sure
ARCH_KMALLOC_MINALIGN stays the same as ARCH_DMA_MINALIGN when
CONFIG_SLOB is enabled.
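Spelling out what Catalin describes: in a SLOB build on an architecture that defines, say, ARCH_DMA_MINALIGN as 128 and ARCH_KMALLOC_MINALIGN as 64 (values purely illustrative), the quoted preprocessor logic reduces to:

```c
/* CONFIG_SLOB: the arch's ARCH_KMALLOC_MINALIGN (64) is discarded first. */
#undef ARCH_KMALLOC_MINALIGN

/*
 * ARCH_DMA_MINALIGN is defined and greater than 8, and ARCH_KMALLOC_MINALIGN
 * is no longer defined, so the #elif branch ties them back together:
 */
#define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN		/* effectively 128 */

/* ARCH_KMALLOC_MINALIGN is now defined, so the final #else branch sets: */
#define KMALLOC_MIN_SIZE ARCH_KMALLOC_MINALIGN		/* 128 */
#define KMALLOC_SHIFT_LOW ilog2(KMALLOC_MIN_SIZE)	/* 7 */
```

So with CONFIG_SLOB the two macros stay tied together, which is the same behaviour as before the patch.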
On Tue, Apr 05, 2022 at 02:57:49PM +0100, Catalin Marinas wrote:
> In preparation for supporting a dynamic kmalloc() minimum alignment,
> allow architectures to define ARCH_KMALLOC_MINALIGN independently of
> ARCH_DMA_MINALIGN. In addition, always define ARCH_DMA_MINALIGN even if
> an architecture does not override it.
>
> After this patch, ARCH_DMA_MINALIGN is expected to be used in static
> alignment annotations and defined by an architecture to be the maximum
> alignment for all supported configurations/SoCs in a single Image.
> ARCH_KMALLOC_MINALIGN, if different, is the minimum alignment guaranteed
> by kmalloc().
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> ---
>  include/linux/slab.h | 23 ++++++++++++++++++-----
>  1 file changed, 18 insertions(+), 5 deletions(-)
>
[...]

Sorry for the noise I made due to misunderstanding :)
Now this patch looks good to me and I think it's worth adding.

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

and works fine with SLAB/SLOB/SLUB on my arm64 machine.

Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Thanks!
On Mon, Apr 11, 2022 at 07:37:01PM +0900, Hyeonggon Yoo wrote:
> On Tue, Apr 05, 2022 at 02:57:49PM +0100, Catalin Marinas wrote:
> > In preparation for supporting a dynamic kmalloc() minimum alignment,
> > allow architectures to define ARCH_KMALLOC_MINALIGN independently of
> > ARCH_DMA_MINALIGN. In addition, always define ARCH_DMA_MINALIGN even if
> > an architecture does not override it.
> >
> > After this patch, ARCH_DMA_MINALIGN is expected to be used in static
> > alignment annotations and defined by an architecture to be the maximum
> > alignment for all supported configurations/SoCs in a single Image.
> > ARCH_KMALLOC_MINALIGN, if different, is the minimum alignment guaranteed
> > by kmalloc().
> >
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
[...]
> Sorry for the noise I made due to misunderstanding :)
> Now this patch looks good to me and I think it's worth adding.
>
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>
> and works fine with SLAB/SLOB/SLUB on my arm64 machine.
>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Thanks for the review and test. We still need to solve the potential crypto
issues raised by Herbert before making the change.
```diff
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 373b3ef99f4e..d58211bdeceb 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -187,17 +187,30 @@ bool kmem_valid_obj(void *object);
 void kmem_dump_obj(void *object);
 #endif
 
+/*
+ * slob does not support independent control of ARCH_KMALLOC_MINALIGN and
+ * ARCH_DMA_MINALIGN.
+ */
+#ifdef CONFIG_SLOB
+#undef ARCH_KMALLOC_MINALIGN
+#endif
+
 /*
  * Some archs want to perform DMA into kmalloc caches and need a guaranteed
  * alignment larger than the alignment of a 64-bit integer.
- * Setting ARCH_KMALLOC_MINALIGN in arch headers allows that.
+ * Setting ARCH_DMA_MINALIGN in arch headers allows that.
  */
-#if defined(ARCH_DMA_MINALIGN) && ARCH_DMA_MINALIGN > 8
+#ifndef ARCH_DMA_MINALIGN
+#define ARCH_DMA_MINALIGN __alignof__(unsigned long long)
+#elif ARCH_DMA_MINALIGN > 8 && !defined(ARCH_KMALLOC_MINALIGN)
 #define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN
-#define KMALLOC_MIN_SIZE ARCH_DMA_MINALIGN
-#define KMALLOC_SHIFT_LOW ilog2(ARCH_DMA_MINALIGN)
-#else
+#endif
+
+#ifndef ARCH_KMALLOC_MINALIGN
 #define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
+#else
+#define KMALLOC_MIN_SIZE ARCH_KMALLOC_MINALIGN
+#define KMALLOC_SHIFT_LOW ilog2(KMALLOC_MIN_SIZE)
 #endif
 
 /*
```
In preparation for supporting a dynamic kmalloc() minimum alignment, allow
architectures to define ARCH_KMALLOC_MINALIGN independently of
ARCH_DMA_MINALIGN. In addition, always define ARCH_DMA_MINALIGN even if an
architecture does not override it.

After this patch, ARCH_DMA_MINALIGN is expected to be used in static alignment
annotations and defined by an architecture to be the maximum alignment for all
supported configurations/SoCs in a single Image. ARCH_KMALLOC_MINALIGN, if
different, is the minimum alignment guaranteed by kmalloc().

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/slab.h | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)
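To illustrate the split the commit message describes, an architecture header could now look roughly like this (the path and the values are hypothetical, not taken from any real architecture):

```c
/* Hypothetical arch/<arch>/include/asm/cache.h after this change */

/*
 * Worst-case cache line size across every SoC a single Image may boot on;
 * used for static __aligned() annotations on DMA-facing structures.
 */
#define ARCH_DMA_MINALIGN	128

/*
 * Minimum alignment that kmalloc() itself guarantees on this architecture;
 * the rest of the series may lower the effective value further at runtime.
 */
#define ARCH_KMALLOC_MINALIGN	64
```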