| Message ID | 20180629002843.31095-5-keescook@chromium.org (mailing list archive) |
|---|---|
| State | Not Applicable, archived |
| Delegated to: | Mike Snitzer |
On Fri, Jun 29, 2018 at 2:28 AM, Kees Cook <keescook@chromium.org> wrote:
> diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
> index 86438b2f10dd..85e8ce1625a2 100644
> --- a/drivers/md/dm-integrity.c
> +++ b/drivers/md/dm-integrity.c
> @@ -521,7 +521,12 @@ static void section_mac(struct dm_integrity_c *ic, unsigned section, __u8 result
>  		}
>  		memset(result + size, 0, JOURNAL_MAC_SIZE - size);
>  	} else {
> -		__u8 digest[size];
> +		__u8 digest[SHASH_MAX_DIGESTSIZE];
> +
> +		if (WARN_ON(size > sizeof(digest))) {
> +			dm_integrity_io_error(ic, "digest_size", -EINVAL);
> +			goto err;
> +		}

I'm still slightly worried that some patches like this one could make
things worse and lead to an actual stack overflow. You define
SHASH_MAX_DIGESTSIZE as '512', which is still quite a lot to put on the
kernel stack. The function also uses SHASH_DESC_ON_STACK(), so now you
have two copies. Then you could call shash_final_unaligned(), which
seems to put a third copy on the stack, so replacing each one with a
fixed-size buffer adds quite a bit of bloat.

Is there actually a digest that can be used in dm-integrity with more
than 64 byte output (matching JOURNAL_MAC_SIZE) here?

      Arnd

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
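[Editor's illustration] The stack accounting Arnd describes is easier to see in a minimal caller sketch. This is not code from the patch; example_mac() and its arguments are invented for illustration, and it assumes SHASH_DESC_ON_STACK() and SHASH_MAX_DIGESTSIZE are available from <crypto/hash.h> as discussed in this thread.

```c
#include <crypto/hash.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/string.h>

/* Hedged sketch: once the VLA becomes a worst-case buffer, the caller
 * carries two sizable objects on its own stack frame, before the shash
 * implementation buffers anything internally. */
static int example_mac(struct crypto_shash *tfm, const u8 *data,
		       unsigned int len, u8 *out, unsigned int out_len)
{
	SHASH_DESC_ON_STACK(desc, tfm);		/* on-stack copy #1: descriptor + hash state */
	u8 digest[SHASH_MAX_DIGESTSIZE];	/* on-stack copy #2: worst-case digest buffer */
	unsigned int size = crypto_shash_digestsize(tfm);
	int r;

	/* The fixed-size buffer needs the runtime check the VLA never had. */
	if (WARN_ON(size > sizeof(digest)))
		return -EINVAL;

	desc->tfm = tfm;
	desc->flags = 0;	/* no CRYPTO_TFM_REQ_MAY_SLEEP */

	r = crypto_shash_digest(desc, data, len, digest);
	if (r)
		return r;

	memcpy(out, digest, min(size, out_len));
	return 0;
}
```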
On Fri, Jun 29, 2018 at 1:43 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Fri, Jun 29, 2018 at 2:28 AM, Kees Cook <keescook@chromium.org> wrote:
>
>> diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
>> index 86438b2f10dd..85e8ce1625a2 100644
>> --- a/drivers/md/dm-integrity.c
>> +++ b/drivers/md/dm-integrity.c
>> @@ -521,7 +521,12 @@ static void section_mac(struct dm_integrity_c *ic, unsigned section, __u8 result
>>  		}
>>  		memset(result + size, 0, JOURNAL_MAC_SIZE - size);
>>  	} else {
>> -		__u8 digest[size];
>> +		__u8 digest[SHASH_MAX_DIGESTSIZE];
>> +
>> +		if (WARN_ON(size > sizeof(digest))) {
>> +			dm_integrity_io_error(ic, "digest_size", -EINVAL);
>> +			goto err;
>> +		}
>
> I'm still slightly worried that some patches like this one could make
> things worse and lead to an actual stack overflow.

As in stack exhaustion? Yeah, this has been a concern of mine for the
crypto stuff because some combinations get BIG. My thinking has been
mainly that it means ALL cases will lead to a bad state instead of only
corner cases, which makes it easier to find and fix.

> You define SHASH_MAX_DIGESTSIZE as '512', which is still quite a lot
> to put on the kernel stack. The function also uses
> SHASH_DESC_ON_STACK(), so now you have two copies. Then you could call
> shash_final_unaligned(), which seems to put a third copy on the stack,
> so replacing each one with a fixed-size buffer adds quite a bit of bloat.
>
> Is there actually a digest that can be used in dm-integrity with more
> than 64 byte output (matching JOURNAL_MAC_SIZE) here?

This conversion was following the existing check (PAGE_SIZE / 8), and
not via an analysis of alg.digestsize users. Let me double-check. For
predefined stuff, it looks like the largest is:

	SKEIN1024_DIGEST_BIT_SIZE / 8 == 128

I can drop this from 512 down to 128...

-Kees
On Fri, Jun 29, 2018 at 02:56:37PM -0700, Kees Cook wrote:
>
> This conversion was following the existing check (PAGE_SIZE / 8), and
> not via an analysis of alg.digestsize users. Let me double-check. For
> predefined stuff, it looks like the largest is:
>
> 	SKEIN1024_DIGEST_BIT_SIZE / 8 == 128

This should be removed. We shouldn't allow generic or new crypto
algorithms in staging.

Thanks,
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 86438b2f10dd..85e8ce1625a2 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -521,7 +521,12 @@ static void section_mac(struct dm_integrity_c *ic, unsigned section, __u8 result
 		}
 		memset(result + size, 0, JOURNAL_MAC_SIZE - size);
 	} else {
-		__u8 digest[size];
+		__u8 digest[SHASH_MAX_DIGESTSIZE];
+
+		if (WARN_ON(size > sizeof(digest))) {
+			dm_integrity_io_error(ic, "digest_size", -EINVAL);
+			goto err;
+		}
 		r = crypto_shash_final(desc, digest);
 		if (unlikely(r)) {
 			dm_integrity_io_error(ic, "crypto_shash_final", r);
@@ -1244,7 +1249,7 @@ static void integrity_metadata(struct work_struct *w)
 		struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io));
 		char *checksums;
 		unsigned extra_space = unlikely(digest_size > ic->tag_size) ? digest_size - ic->tag_size : 0;
-		char checksums_onstack[ic->tag_size + extra_space];
+		char checksums_onstack[SHASH_MAX_DIGESTSIZE];
 		unsigned sectors_to_process = dio->range.n_sectors;
 		sector_t sector = dio->range.logical_sector;
 
@@ -1253,8 +1258,14 @@ static void integrity_metadata(struct work_struct *w)
 
 		checksums = kmalloc((PAGE_SIZE >> SECTOR_SHIFT >> ic->sb->log2_sectors_per_block) * ic->tag_size + extra_space,
 				    GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
-		if (!checksums)
+		if (!checksums) {
 			checksums = checksums_onstack;
+			if (WARN_ON(extra_space &&
+				    digest_size > sizeof(checksums_onstack))) {
+				r = -EINVAL;
+				goto error;
+			}
+		}
 
 		__bio_for_each_segment(bv, bio, iter, dio->orig_bi_iter) {
 			unsigned pos;
@@ -1466,7 +1477,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
 				} while (++s < ic->sectors_per_block);
 #ifdef INTERNAL_VERIFY
 				if (ic->internal_hash) {
-					char checksums_onstack[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+					char checksums_onstack[max(SHASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
 
 					integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
 					if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
@@ -1516,7 +1527,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
 			if (ic->internal_hash) {
 				unsigned digest_size = crypto_shash_digestsize(ic->internal_hash);
 				if (unlikely(digest_size > ic->tag_size)) {
-					char checksums_onstack[digest_size];
+					char checksums_onstack[SHASH_MAX_DIGESTSIZE];
 					integrity_sector_checksum(ic, logical_sector, (char *)js, checksums_onstack);
 					memcpy(journal_entry_tag(ic, je), checksums_onstack, ic->tag_size);
 				} else
@@ -1937,7 +1948,7 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start,
 				    unlikely(from_replay) &&
 #endif
 				    ic->internal_hash) {
-					char test_tag[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+					char test_tag[max_t(size_t, SHASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
 
 					integrity_sector_checksum(ic, sec + ((l - j) << ic->sb->log2_sectors_per_block),
 								  (char *)access_journal_data(ic, i, l), test_tag);
In the quest to remove all stack VLA usage from the kernel[1], this uses
the new SHASH_MAX_DIGESTSIZE from the crypto layer to allocate the upper
bounds on stack usage.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/md/dm-integrity.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)
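[Editor's illustration] The conversion pattern the changelog describes is independent of dm-integrity and can be shown standalone. The sketch below is illustrative only: plain userspace C, with MAX_DIGEST_SIZE and copy_digest() as invented stand-ins for SHASH_MAX_DIGESTSIZE and the helpers touched by the patch. A runtime-sized stack array becomes a fixed worst-case buffer plus an explicit bounds check.

```c
#include <stdio.h>
#include <string.h>

#define MAX_DIGEST_SIZE 64	/* assumed worst case for this sketch */

static int copy_digest(unsigned char *out, const unsigned char *in, size_t size)
{
	/* Before: "unsigned char digest[size];" -- a VLA whose stack cost
	 * depends on runtime input and cannot be bounded at build time. */
	unsigned char digest[MAX_DIGEST_SIZE];	/* after: fixed upper bound */

	/* The fixed buffer requires the runtime check the VLA never had. */
	if (size > sizeof(digest))
		return -1;

	memcpy(digest, in, size);
	memcpy(out, digest, size);
	return 0;
}

int main(void)
{
	unsigned char in[32] = { 0x42 }, out[32];

	if (copy_digest(out, in, sizeof(in)))
		return 1;
	printf("copied %zu bytes\n", sizeof(in));
	return 0;
}
```

The trade-off debated above follows directly from this shape: the bound check turns a silent out-of-bounds risk into a detectable error path, but every caller now pays the worst-case stack cost even when the actual digest is small.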