Message ID | 20180629002843.31095-10-keescook@chromium.org (mailing list archive)
---|---
State | Not Applicable, archived
Delegated to: | Mike Snitzer
On Thu, Jun 28, 2018 at 05:28:43PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this uses
> the newly defined max alignment to perform unaligned hashing to avoid
> VLAs, and drops the helper function while adding sanity checks on the
> resulting buffer sizes. Additionally, the __aligned_largest macro is
> removed since this helper was the only user.
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  crypto/shash.c               | 19 ++++++++-----------
>  include/linux/compiler-gcc.h |  1 -
>  2 files changed, 8 insertions(+), 12 deletions(-)
>
> diff --git a/crypto/shash.c b/crypto/shash.c
> index ab6902c6dae7..8081c5e03770 100644
> --- a/crypto/shash.c
> +++ b/crypto/shash.c
> @@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
>  }
>  EXPORT_SYMBOL_GPL(crypto_shash_setkey);
>
> -static inline unsigned int shash_align_buffer_size(unsigned len,
> -						   unsigned long mask)
> -{
> -	typedef u8 __aligned_largest u8_aligned;
> -	return len + (mask & ~(__alignof__(u8_aligned) - 1));
> -}
> -
>  static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
>  				  unsigned int len)
>  {
> @@ -88,11 +81,13 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
>  	unsigned long alignmask = crypto_shash_alignmask(tfm);
>  	unsigned int unaligned_len = alignmask + 1 -
>  				     ((unsigned long)data & alignmask);
> -	u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
> -		__aligned_largest;
> +	u8 ubuf[MAX_ALGAPI_ALIGNMASK + 1];
>  	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
>  	int err;
>
> +	if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
> +		return -EINVAL;
> +

How is 'ubuf' guaranteed to be large enough?  You removed the __aligned
attribute, so 'ubuf' can have any alignment.  So the aligned pointer 'buf'
may be as high as '&ubuf[alignmask]'.  Then, up to 'alignmask' bytes of
data will be copied into 'buf'... resulting in up to '2 * alignmask' bytes
needed in 'ubuf'.  But you've only guaranteed 'alignmask + 1' bytes.

>  	if (unaligned_len > len)
>  		unaligned_len = len;
>
> @@ -124,11 +119,13 @@ static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
>  	unsigned long alignmask = crypto_shash_alignmask(tfm);
>  	struct shash_alg *shash = crypto_shash_alg(tfm);
>  	unsigned int ds = crypto_shash_digestsize(tfm);
> -	u8 ubuf[shash_align_buffer_size(ds, alignmask)]
> -		__aligned_largest;
> +	u8 ubuf[SHASH_MAX_DIGESTSIZE];
>  	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
>  	int err;
>
> +	if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
> +		return -EINVAL;
> +

Similar problem here.  Wouldn't 'ubuf' need to be of size 'alignmask + ds'?

>  	err = shash->final(desc, buf);
>  	if (err)
>  		goto out;
> diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
> index f1a7492a5cc8..1f1cdef36a82 100644
> --- a/include/linux/compiler-gcc.h
> +++ b/include/linux/compiler-gcc.h
> @@ -125,7 +125,6 @@
>   */
>  #define __pure			__attribute__((pure))
>  #define __aligned(x)		__attribute__((aligned(x)))
> -#define __aligned_largest	__attribute__((aligned))
>  #define __printf(a, b)		__attribute__((format(printf, a, b)))
>  #define __scanf(a, b)		__attribute__((format(scanf, a, b)))
>  #define __attribute_const__	__attribute__((__const__))
> --
> 2.17.1
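To make the worst case Eric describes concrete, here is a minimal userspace sketch of the arithmetic. PTR_ALIGN is reimplemented locally, and alignmask is assumed to be 63 (the value MAX_ALGAPI_ALIGNMASK is expected to take); neither is taken from the patch itself:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Local stand-in for the kernel's PTR_ALIGN(): round p up to a multiple of a. */
#define PTR_ALIGN(p, a) \
	((uint8_t *)(((uintptr_t)(p) + ((a) - 1)) & ~((uintptr_t)(a) - 1)))

int main(void)
{
	unsigned long alignmask = 63;	/* assumed worst case: MAX_ALGAPI_ALIGNMASK */
	uint8_t ubuf[63 + 1];		/* the size the patch reserves */
	uint8_t *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);

	/*
	 * If ubuf happens to start just past a 64-byte boundary, buf lands
	 * up to 63 bytes into ubuf, leaving as little as 1 usable byte --
	 * while up to 63 unaligned bytes may then be copied into buf.
	 */
	printf("alignment slack: %td of %zu bytes\n",
	       (ptrdiff_t)(buf - ubuf), sizeof(ubuf));
	return 0;
}
```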
On Sat, Jun 30, 2018 at 12:03 AM, Eric Biggers <ebiggers3@gmail.com> wrote:
> On Thu, Jun 28, 2018 at 05:28:43PM -0700, Kees Cook wrote:
>> @@ -88,11 +81,13 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
>>  	unsigned long alignmask = crypto_shash_alignmask(tfm);
>>  	unsigned int unaligned_len = alignmask + 1 -
>>  				     ((unsigned long)data & alignmask);
>> -	u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
>> -		__aligned_largest;
>> +	u8 ubuf[MAX_ALGAPI_ALIGNMASK + 1];
>>  	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
>>  	int err;
>>
>> +	if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
>> +		return -EINVAL;
>> +
>
> How is 'ubuf' guaranteed to be large enough?  You removed the __aligned
> attribute, so 'ubuf' can have any alignment.  So the aligned pointer 'buf' may
> be as high as '&ubuf[alignmask]'.  Then, up to 'alignmask' bytes of data will be
> copied into 'buf'... resulting in up to '2 * alignmask' bytes needed in 'ubuf'.
> But you've only guaranteed 'alignmask + 1' bytes.

Hm, good point. Adding __aligned(MAX_ALGAPI_ALIGNMASK + 1) looks to
fix this, yes?

Also, if __aligned() is used here, can't PTR_ALIGN() be dropped? (I
think you pointed this out earlier.)

Also, is "unaligned_len" being calculated correctly? Let's say
alignmask is 63. If data is binary ...111111, then unaligned_len will
be 64 - 63 == 1, which is fine: we copy 1 byte out, bump the address
by 1, and we're happily aligned to ...000000. If data is ...000000,
then unaligned_len will be 64. But it should be 0. Shouldn't this be:

	unsigned int unaligned_len;

	unaligned_len = (unsigned long)data & alignmask;
	if (unaligned_len)
		unaligned_len = alignmask + 1 - unaligned_len;

And then ubuf only needs to be MAX_ALGAPI_ALIGNMASK, without the +1?

-Kees
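A quick standalone check of the two boundary cases Kees walks through (illustrative only; the 0x1000 base address and the hard-coded alignmask of 63 are arbitrary choices, not values from the patch):

```c
#include <stdio.h>

int main(void)
{
	unsigned long alignmask = 63;
	/* ...111111 and ...000000 in the low six bits, respectively */
	unsigned long addrs[] = { 0x1000 + 63, 0x1000 };

	for (int i = 0; i < 2; i++) {
		unsigned int unaligned_len =
			alignmask + 1 - (addrs[i] & alignmask);
		printf("data=%#lx -> unaligned_len=%u\n",
		       addrs[i], unaligned_len);
	}
	/* Prints 1 for the ...111111 case and 64 for the ...000000 case,
	 * matching the walkthrough above; the next reply explains why the
	 * aligned case never actually reaches this function. */
	return 0;
}
```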
On Sun, Jul 01, 2018 at 10:04:59AM -0700, Kees Cook wrote:
> On Sat, Jun 30, 2018 at 12:03 AM, Eric Biggers <ebiggers3@gmail.com> wrote:
> > On Thu, Jun 28, 2018 at 05:28:43PM -0700, Kees Cook wrote:
> >> @@ -88,11 +81,13 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
> >>  	unsigned long alignmask = crypto_shash_alignmask(tfm);
> >>  	unsigned int unaligned_len = alignmask + 1 -
> >>  				     ((unsigned long)data & alignmask);
> >> -	u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
> >> -		__aligned_largest;
> >> +	u8 ubuf[MAX_ALGAPI_ALIGNMASK + 1];
> >>  	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
> >>  	int err;
> >>
> >> +	if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
> >> +		return -EINVAL;
> >> +
> >
> > How is 'ubuf' guaranteed to be large enough?  You removed the __aligned
> > attribute, so 'ubuf' can have any alignment.  So the aligned pointer 'buf' may
> > be as high as '&ubuf[alignmask]'.  Then, up to 'alignmask' bytes of data will be
> > copied into 'buf'... resulting in up to '2 * alignmask' bytes needed in 'ubuf'.
> > But you've only guaranteed 'alignmask + 1' bytes.
>
> Hm, good point. Adding __aligned(MAX_ALGAPI_ALIGNMASK + 1) looks to
> fix this, yes?
>
> Also, if __aligned() is used here, can't PTR_ALIGN() be dropped? (I
> think you pointed this out earlier.)

Sure, I'm just not sure whether __aligned() with such a large alignment is
guaranteed to work on stack variables on all architectures.  See e.g.
https://patchwork.kernel.org/patch/9507697/.

> Also, is "unaligned_len" being calculated correctly? Let's say
> alignmask is 63. If data is binary ...111111, then unaligned_len will
> be 64 - 63 == 1, which is fine: we copy 1 byte out, bump the address
> by 1, and we're happily aligned to ...000000. If data is ...000000,
> then unaligned_len will be 64. But it should be 0. Shouldn't this be:
>
> 	unsigned int unaligned_len;
>
> 	unaligned_len = (unsigned long)data & alignmask;
> 	if (unaligned_len)
> 		unaligned_len = alignmask + 1 - unaligned_len;
>
> And then ubuf only needs to be MAX_ALGAPI_ALIGNMASK, without the +1?

shash_update_unaligned() is only called when 'data & alignmask'.
Similarly with shash_final_unaligned().

Though, calculating 'unaligned_len' could be simplified to

	unsigned int unaligned_len = -(unsigned long)data & alignmask;

which works either way.

- Eric
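The branch-free form Eric suggests can be checked against Kees' branchy version over a full alignment period with a short standalone program (illustrative only; alignmask is again hard-coded to 63):

```c
#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long alignmask = 63;

	for (unsigned long data = 0x1000; data < 0x1000 + 64; data++) {
		/* Branchy form from Kees' suggestion. */
		unsigned int a = data & alignmask;
		if (a)
			a = alignmask + 1 - a;

		/* Eric's branch-free form: negate, then mask. */
		unsigned int b = -data & alignmask;

		assert(a == b);
	}
	printf("both forms agree for every offset\n");
	return 0;
}
```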
On Sun, Jul 1, 2018 at 10:20 AM, Eric Biggers <ebiggers3@gmail.com> wrote:
> On Sun, Jul 01, 2018 at 10:04:59AM -0700, Kees Cook wrote:
>> On Sat, Jun 30, 2018 at 12:03 AM, Eric Biggers <ebiggers3@gmail.com> wrote:
>> > On Thu, Jun 28, 2018 at 05:28:43PM -0700, Kees Cook wrote:
>> >> @@ -88,11 +81,13 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
>> >>  	unsigned long alignmask = crypto_shash_alignmask(tfm);
>> >>  	unsigned int unaligned_len = alignmask + 1 -
>> >>  				     ((unsigned long)data & alignmask);
>> >> -	u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
>> >> -		__aligned_largest;
>> >> +	u8 ubuf[MAX_ALGAPI_ALIGNMASK + 1];
>> >>  	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
>> >>  	int err;
>> >>
>> >> +	if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
>> >> +		return -EINVAL;
>> >> +
>> >
>> > How is 'ubuf' guaranteed to be large enough?  You removed the __aligned
>> > attribute, so 'ubuf' can have any alignment.  So the aligned pointer 'buf' may
>> > be as high as '&ubuf[alignmask]'.  Then, up to 'alignmask' bytes of data will be
>> > copied into 'buf'... resulting in up to '2 * alignmask' bytes needed in 'ubuf'.
>> > But you've only guaranteed 'alignmask + 1' bytes.
>>
>> Hm, good point. Adding __aligned(MAX_ALGAPI_ALIGNMASK + 1) looks to
>> fix this, yes?
>>
>> Also, if __aligned() is used here, can't PTR_ALIGN() be dropped? (I
>> think you pointed this out earlier.)
>
> Sure, I'm just not sure whether __aligned() with such a large alignment is
> guaranteed to work on stack variables on all architectures. See e.g.
> https://patchwork.kernel.org/patch/9507697/.

That's terrible. :( That seems like a compiler bug, but okay.

>> Also, is "unaligned_len" being calculated correctly? Let's say
>> alignmask is 63. If data is binary ...111111, then unaligned_len will
>> be 64 - 63 == 1, which is fine: we copy 1 byte out, bump the address
>> by 1, and we're happily aligned to ...000000. If data is ...000000,
>> then unaligned_len will be 64. But it should be 0. Shouldn't this be:
>>
>> 	unsigned int unaligned_len;
>>
>> 	unaligned_len = (unsigned long)data & alignmask;
>> 	if (unaligned_len)
>> 		unaligned_len = alignmask + 1 - unaligned_len;
>>
>> And then ubuf only needs to be MAX_ALGAPI_ALIGNMASK, without the +1?
>
> shash_update_unaligned() is only called when 'data & alignmask'.
> Similarly with shash_final_unaligned().

Ah! I see that now.

> Though, calculating 'unaligned_len' could be simplified to
>
> 	unsigned int unaligned_len = -(unsigned long)data & alignmask;
>
> which works either way.

So, since we can't depend on __aligned() working, I'll just keep the
PTR_ALIGN and add MAX_ALGAPI_ALIGNMASK to each array. That'll be less
memory-efficient, but it'll actually get aligned correctly.

-Kees
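In code, the plan Kees describes would look something like the sketch below for the two helpers. The exact array sizes follow the sizing argument earlier in the thread (at most alignmask data bytes in the update path, ds digest bytes in the final path, plus up to alignmask bytes of slack consumed by PTR_ALIGN()); they are this editor's reading of the plan, not a posted follow-up patch:

```c
	/* Update path: at most alignmask unaligned data bytes to stash,
	 * plus up to alignmask bytes skipped by PTR_ALIGN(). */
	u8 ubuf[MAX_ALGAPI_ALIGNMASK * 2];
	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);

	/* Final path: ds digest bytes, plus the same alignment slack. */
	u8 ubuf[MAX_ALGAPI_ALIGNMASK + SHASH_MAX_DIGESTSIZE];
	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
```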
diff --git a/crypto/shash.c b/crypto/shash.c
index ab6902c6dae7..8081c5e03770 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(crypto_shash_setkey);
 
-static inline unsigned int shash_align_buffer_size(unsigned len,
-						   unsigned long mask)
-{
-	typedef u8 __aligned_largest u8_aligned;
-	return len + (mask & ~(__alignof__(u8_aligned) - 1));
-}
-
 static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
 				  unsigned int len)
 {
@@ -88,11 +81,13 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	unsigned int unaligned_len = alignmask + 1 -
 				     ((unsigned long)data & alignmask);
-	u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
-		__aligned_largest;
+	u8 ubuf[MAX_ALGAPI_ALIGNMASK + 1];
 	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
 	int err;
 
+	if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
+		return -EINVAL;
+
 	if (unaligned_len > len)
 		unaligned_len = len;
 
@@ -124,11 +119,13 @@ static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned int ds = crypto_shash_digestsize(tfm);
-	u8 ubuf[shash_align_buffer_size(ds, alignmask)]
-		__aligned_largest;
+	u8 ubuf[SHASH_MAX_DIGESTSIZE];
 	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
 	int err;
 
+	if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
+		return -EINVAL;
+
 	err = shash->final(desc, buf);
 	if (err)
 		goto out;
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index f1a7492a5cc8..1f1cdef36a82 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -125,7 +125,6 @@
  */
 #define __pure			__attribute__((pure))
 #define __aligned(x)		__attribute__((aligned(x)))
-#define __aligned_largest	__attribute__((aligned))
 #define __printf(a, b)		__attribute__((format(printf, a, b)))
 #define __scanf(a, b)		__attribute__((format(scanf, a, b)))
 #define __attribute_const__	__attribute__((__const__))
In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed since this helper was the only user.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/shash.c               | 19 ++++++++-----------
 include/linux/compiler-gcc.h |  1 -
 2 files changed, 8 insertions(+), 12 deletions(-)