Message ID | 20180904181629.20712-3-keescook@chromium.org (mailing list archive)
---|---
State | Superseded
Delegated to: | Herbert Xu
Series | crypto: Remove VLA usage from skcipher
On Tuesday, September 4, 2018, 8:16:29 PM CEST Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> caps the skcipher request size similar to other limits and adds a sanity
> check at registration. Looking at instrumented tcrypt output, the largest
> is for lrw:
>
> crypt: testing lrw(aes)
> crypto_skcipher_set_reqsize: 8
> crypto_skcipher_set_reqsize: 88
> crypto_skcipher_set_reqsize: 472
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  include/crypto/internal/skcipher.h | 3 +++
>  include/crypto/skcipher.h          | 4 +++-
>  2 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
> index d2926ecae2ac..6da811c0747e 100644
> --- a/include/crypto/internal/skcipher.h
> +++ b/include/crypto/internal/skcipher.h
> @@ -130,6 +130,9 @@ static inline struct crypto_skcipher *crypto_spawn_skcipher(
>  static inline int crypto_skcipher_set_reqsize(
>  	struct crypto_skcipher *skcipher, unsigned int reqsize)
>  {
> +	if (WARN_ON(reqsize > SKCIPHER_MAX_REQSIZE))
> +		return -EINVAL;
> +
>  	skcipher->reqsize = reqsize;
>
>  	return 0;
> diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
> index 2f327f090c3e..c48e194438cf 100644
> --- a/include/crypto/skcipher.h
> +++ b/include/crypto/skcipher.h
> @@ -139,9 +139,11 @@ struct skcipher_alg {
>  	struct crypto_alg base;
>  };
>
> +#define SKCIPHER_MAX_REQSIZE	472
> +
>  #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
>  	char __##name##_desc[sizeof(struct skcipher_request) + \
> -		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
> +		SKCIPHER_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
>  	struct skcipher_request *name = (void *)__##name##_desc

Now tfm could be removed from the macro arguments, no?

Best regards,
Alexander
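For illustration: once the buffer size is the SKCIPHER_MAX_REQSIZE constant,
the macro body no longer reads anything from tfm, so Alexander's suggestion
would reduce it to something like the sketch below (hypothetical, not part of
any posted patch):

/* Hypothetical: SKCIPHER_REQUEST_ON_STACK() with the now-unused tfm
 * argument dropped; SKCIPHER_MAX_REQSIZE is the constant added above. */
#define SKCIPHER_REQUEST_ON_STACK(name) \
	char __##name##_desc[sizeof(struct skcipher_request) + \
		SKCIPHER_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
	struct skcipher_request *name = (void *)__##name##_desc

Dropping the argument would mean updating every caller, which may be why the
patch keeps the signature unchanged.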
On 4 September 2018 at 20:16, Kees Cook <keescook@chromium.org> wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> caps the skcipher request size similar to other limits and adds a sanity
> check at registration. Looking at instrumented tcrypt output, the largest
> is for lrw:
>
> crypt: testing lrw(aes)
> crypto_skcipher_set_reqsize: 8
> crypto_skcipher_set_reqsize: 88
> crypto_skcipher_set_reqsize: 472
>

Are you sure this is a representative sampling? I haven't double
checked myself, but we have plenty of drivers for peripherals in
drivers/crypto that implement block ciphers, and they would not turn
up in tcrypt unless you are running on a platform that provides the
hardware in question.

> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> [...]
> --
> 2.17.1
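The "instrumented tcrypt output" in the commit message was presumably
produced by logging inside crypto_skcipher_set_reqsize(); a minimal sketch of
such instrumentation (assumed here, not shown in the thread) would be:

/* Debug-only sketch: print each registered reqsize while tcrypt runs,
 * producing lines like "crypto_skcipher_set_reqsize: 472".
 * (pr_info() comes from linux/printk.h, already pulled in by kernel.h.) */
static inline int crypto_skcipher_set_reqsize(
	struct crypto_skcipher *skcipher, unsigned int reqsize)
{
	pr_info("%s: %u\n", __func__, reqsize);	/* instrumentation */

	skcipher->reqsize = reqsize;

	return 0;
}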
On Wed, Sep 5, 2018 at 2:18 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On 4 September 2018 at 20:16, Kees Cook <keescook@chromium.org> wrote:
>> In the quest to remove all stack VLA usage from the kernel[1], this
>> caps the skcipher request size similar to other limits and adds a sanity
>> check at registration. Looking at instrumented tcrypt output, the largest
>> is for lrw:
>>
>> crypt: testing lrw(aes)
>> crypto_skcipher_set_reqsize: 8
>> crypto_skcipher_set_reqsize: 88
>> crypto_skcipher_set_reqsize: 472
>>
>
> Are you sure this is a representative sampling? I haven't double
> checked myself, but we have plenty of drivers for peripherals in
> drivers/crypto that implement block ciphers, and they would not turn
> up in tcrypt unless you are running on a platform that provides the
> hardware in question.

Hrm, excellent point. Looking at this again:

The core part of the VLA is using this in the ON_STACK macro:

static inline unsigned int crypto_skcipher_reqsize(struct crypto_skcipher *tfm)
{
	return tfm->reqsize;
}

I don't find any struct crypto_skcipher .reqsize static initializers,
and the initial reqsize is here:

static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
{
	...
	skcipher->reqsize = crypto_ablkcipher_reqsize(ablkcipher) +
			    sizeof(struct ablkcipher_request);

with updates via crypto_skcipher_set_reqsize().

So I have to examine ablkcipher reqsize too:

static inline unsigned int crypto_ablkcipher_reqsize(
	struct crypto_ablkcipher *tfm)
{
	return crypto_ablkcipher_crt(tfm)->reqsize;
}

And of the crt_ablkcipher.reqsize assignments/initializers, I found:

ablkcipher reqsize:
    1	struct dcp_aes_req_ctx
    8	struct atmel_tdes_reqctx
    8	struct cryptd_blkcipher_request_ctx
    8	struct mtk_aes_reqctx
    8	struct omap_des_reqctx
    8	struct s5p_aes_reqctx
    8	struct sahara_aes_reqctx
    8	struct stm32_cryp_reqctx
    8	struct stm32_cryp_reqctx
   16	struct ablk_ctx
   24	struct atmel_aes_reqctx
   48	struct omap_aes_reqctx
   48	struct omap_aes_reqctx
   48	struct qat_crypto_request
   56	struct artpec6_crypto_request_context
   64	struct chcr_blkcipher_req_ctx
   80	struct spacc_req
   80	struct virtio_crypto_sym_request
  136	struct qce_cipher_reqctx
  168	struct n2_request_context
  328	struct ccp_des3_req_ctx
  400	struct ccp_aes_req_ctx
  536	struct hifn_request_context
  992	struct cvm_req_ctx
 2456	struct iproc_reqctx_s

The base ablkcipher wrapper is:
   80	struct ablkcipher_request

And in my earlier skcipher wrapper analysis, lrw was the largest
skcipher wrapper:
  384	struct rctx

iproc_reqctx_s is an extreme outlier, with cvm_req_ctx at a bit less
than half.

Making this a 2920 byte fixed array doesn't seem sensible at all
(though that's what's already possible to use with existing
SKCIPHER_REQUEST_ON_STACK users).

What's the right path forward here?

-Kees
On 5 September 2018 at 23:05, Kees Cook <keescook@chromium.org> wrote:
> On Wed, Sep 5, 2018 at 2:18 AM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>> Are you sure this is a representative sampling? I haven't double
>> checked myself, but we have plenty of drivers for peripherals in
>> drivers/crypto that implement block ciphers, and they would not turn
>> up in tcrypt unless you are running on a platform that provides the
>> hardware in question.
>
> Hrm, excellent point. Looking at this again:
> [...]
> iproc_reqctx_s is an extreme outlier, with cvm_req_ctx at a bit less
> than half.
>
> Making this a 2920 byte fixed array doesn't seem sensible at all
> (though that's what's already possible to use with existing
> SKCIPHER_REQUEST_ON_STACK users).
>
> What's the right path forward here?
>

The skcipher implementations based on crypto IP blocks are typically
asynchronous, and I wouldn't be surprised if a fair number of
SKCIPHER_REQUEST_ON_STACK() users are limited to synchronous
skciphers.

So we could formalize this and limit SKCIPHER_REQUEST_ON_STACK() to
synchronous skciphers, which implies that the reqsize limit only has
to apply to synchronous skciphers as well. But before we can do this,
we have to identify the remaining occurrences that allow asynchronous
skciphers to be used, and replace them with heap allocations.
On Wed, Sep 5, 2018 at 3:49 PM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On 5 September 2018 at 23:05, Kees Cook <keescook@chromium.org> wrote:
>> [...]
>> iproc_reqctx_s is an extreme outlier, with cvm_req_ctx at a bit less
>> than half.
>>
>> Making this a 2920 byte fixed array doesn't seem sensible at all
>> (though that's what's already possible to use with existing
>> SKCIPHER_REQUEST_ON_STACK users).
>>
>> What's the right path forward here?
>>
>
> The skcipher implementations based on crypto IP blocks are typically
> asynchronous, and I wouldn't be surprised if a fair number of
> SKCIPHER_REQUEST_ON_STACK() users are limited to synchronous
> skciphers.

Looks similar to ahash vs shash. :) Yes, so nearly all
crypto_alloc_skcipher() users explicitly mask away ASYNC. What's left
appears to be:

crypto/drbg.c:	sk_tfm = crypto_alloc_skcipher(ctr_name, 0, 0);
crypto/tcrypt.c:	tfm = crypto_alloc_skcipher(algo, 0, async ? 0 : CRYPTO_ALG_ASYNC);
drivers/crypto/omap-aes.c:	ctx->ctr = crypto_alloc_skcipher("ecb(aes)", 0, 0);
drivers/md/dm-crypt.c:	cc->cipher_tfm.tfms[i] = crypto_alloc_skcipher(ciphermode, 0, 0);
drivers/md/dm-integrity.c:	ic->journal_crypt = crypto_alloc_skcipher(ic->journal_crypt_alg.alg_string, 0, 0);
fs/crypto/keyinfo.c:	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
fs/crypto/keyinfo.c:	ctfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
fs/ecryptfs/crypto.c:	crypt_stat->tfm = crypto_alloc_skcipher(full_alg_name, 0, 0);

I'll cross-reference this with SKCIPHER_REQUEST_ON_STACK...

> So we could formalize this and limit SKCIPHER_REQUEST_ON_STACK() to
> synchronous skciphers, which implies that the reqsize limit only has
> to apply to synchronous skciphers as well. But before we can do this,
> we have to identify the remaining occurrences that allow asynchronous
> skciphers to be used, and replace them with heap allocations.

Sounds good; thanks!

-Kees
On Thu, Sep 6, 2018 at 1:49 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On 5 September 2018 at 23:05, Kees Cook <keescook@chromium.org> wrote:
>> [...]
>> Making this a 2920 byte fixed array doesn't seem sensible at all
>> (though that's what's already possible to use with existing
>> SKCIPHER_REQUEST_ON_STACK users).
>>
>> What's the right path forward here?
>>
>
> The skcipher implementations based on crypto IP blocks are typically
> asynchronous, and I wouldn't be surprised if a fair number of
> SKCIPHER_REQUEST_ON_STACK() users are limited to synchronous
> skciphers.

According to Herbert, SKCIPHER_REQUEST_ON_STACK() may only be used for
invoking synchronous ciphers.

In fact, due to the way the crypto API is built, if you try using it
with any transformation that uses DMA, you would most probably end up
trying to DMA to/from the stack, which as we all know is not a great
idea.

> So we could formalize this and limit SKCIPHER_REQUEST_ON_STACK() to
> synchronous skciphers, which implies that the reqsize limit only has
> to apply to synchronous skciphers as well. But before we can do this,
> we have to identify the remaining occurrences that allow asynchronous
> skciphers to be used, and replace them with heap allocations.

Any such occurrences are almost certainly broken already due to the
DMA issue I've mentioned.

Gilad
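To make Gilad's point concrete, here is a hedged illustration (the driver
name and context layout are invented for the example): a driver that
DMA-maps part of its per-request context will map stack memory whenever the
request came from SKCIPHER_REQUEST_ON_STACK():

#include <linux/dma-mapping.h>
#include <crypto/internal/skcipher.h>

struct example_reqctx {			/* hypothetical driver reqctx */
	u8 iv[16];			/* driver DMAs the IV from here */
};

static int example_run(struct device *dev, struct skcipher_request *req)
{
	struct example_reqctx *rctx = skcipher_request_ctx(req);
	dma_addr_t iv_dma;

	/* If req lives on the stack, this maps stack memory: invalid
	 * with CONFIG_VMAP_STACK and unsafe in general. */
	iv_dma = dma_map_single(dev, rctx->iv, sizeof(rctx->iv),
				DMA_TO_DEVICE);
	if (dma_mapping_error(dev, iv_dma))
		return -ENOMEM;

	/* ... program the hardware using iv_dma ... */

	dma_unmap_single(dev, iv_dma, sizeof(rctx->iv), DMA_TO_DEVICE);
	return 0;
}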
On 6 September 2018 at 06:53, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
> On Thu, Sep 6, 2018 at 1:49 AM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>> [...]
>> The skcipher implementations based on crypto IP blocks are typically
>> asynchronous, and I wouldn't be surprised if a fair number of
>> SKCIPHER_REQUEST_ON_STACK() users are limited to synchronous
>> skciphers.
>
> According to Herbert, SKCIPHER_REQUEST_ON_STACK() may only be used
> for invoking synchronous ciphers.
>
> In fact, due to the way the crypto API is built, if you try using it
> with any transformation that uses DMA, you would most probably end up
> trying to DMA to/from the stack, which as we all know is not a great
> idea.
>

Ah yes, I found [0] which contains that quote.

So that means that Kees can disregard the occurrences that are async
only, but it still implies that we cannot limit the reqsize like he
proposes unless we take the sync/async nature into account. It also
means we should probably BUG() or WARN() in SKCIPHER_REQUEST_ON_STACK()
when used with an async algo.

>> So we could formalize this and limit SKCIPHER_REQUEST_ON_STACK() to
>> synchronous skciphers, which implies that the reqsize limit only has
>> to apply to synchronous skciphers as well. But before we can do this,
>> we have to identify the remaining occurrences that allow asynchronous
>> skciphers to be used, and replace them with heap allocations.
>
> Any such occurrences are almost certainly broken already due to the
> DMA issue I've mentioned.
>

I am not convinced of this. The skcipher request struct does not
contain any payload buffers, and whether the algo specific ctx struct
is used for DMA is completely up to the driver. So I am quite sure
there are plenty of async algos that work fine with
SKCIPHER_REQUEST_ON_STACK() and vmapped stacks.

> Gilad
>
> --
> Gilad Ben-Yossef
> Chief Coffee Drinker
>
> values of β will give rise to dom!

[0] https://www.redhat.com/archives/dm-devel/2018-January/msg00087.html
On 6 September 2018 at 09:21, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 6 September 2018 at 06:53, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
>> According to Herbert, SKCIPHER_REQUEST_ON_STACK() may only be used
>> for invoking synchronous ciphers.
>>
>> In fact, due to the way the crypto API is built, if you try using it
>> with any transformation that uses DMA, you would most probably end up
>> trying to DMA to/from the stack, which as we all know is not a great
>> idea.
>>
>
> Ah yes, I found [0] which contains that quote.
>
> So that means that Kees can disregard the occurrences that are async
> only, but it still implies that we cannot limit the reqsize like he
> proposes unless we take the sync/async nature into account. It also
> means we should probably BUG() or WARN() in
> SKCIPHER_REQUEST_ON_STACK() when used with an async algo.
>

Something like this should do the trick:

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 2f327f090c3e..70584e0f26bc 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -142,7 +142,9 @@ struct skcipher_alg {
 #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
 	char __##name##_desc[sizeof(struct skcipher_request) + \
 		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
-	struct skcipher_request *name = (void *)__##name##_desc
+	struct skcipher_request *name = WARN_ON( \
+		crypto_skcipher_alg(tfm)->base.cra_flags & CRYPTO_ALG_ASYNC) \
+		? NULL : (void *)__##name##_desc

 /**
  * DOC: Symmetric Key Cipher API

That way, we will almost certainly oops on a NULL pointer dereference
right after, but we at least avoid the stack corruption.

>>> So we could formalize this and limit SKCIPHER_REQUEST_ON_STACK() to
>>> synchronous skciphers, which implies that the reqsize limit only has
>>> to apply to synchronous skciphers as well. [...]
>>
>> Any such occurrences are almost certainly broken already due to the
>> DMA issue I've mentioned.
>>
>
> I am not convinced of this. The skcipher request struct does not
> contain any payload buffers, and whether the algo specific ctx struct
> is used for DMA is completely up to the driver. So I am quite sure
> there are plenty of async algos that work fine with
> SKCIPHER_REQUEST_ON_STACK() and vmapped stacks.
>
> [0] https://www.redhat.com/archives/dm-devel/2018-January/msg00087.html
On Thu, Sep 6, 2018 at 10:21 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> [...]
> I am not convinced of this. The skcipher request struct does not
> contain any payload buffers, and whether the algo specific ctx struct
> is used for DMA is completely up to the driver. So I am quite sure
> there are plenty of async algos that work fine with
> SKCIPHER_REQUEST_ON_STACK() and vmapped stacks.

You are right that it is up to the driver, but the cost is an extra
memory allocation and release *per request* for any per-request data
that needs to be DMAable beyond the actual plain and cipher text
buffers, such as the IV, so driver writers have an incentive against
doing that :-)

Gilad
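The per-request cost Gilad mentions looks roughly like this in practice
(a sketch under assumptions; the helper and its name are illustrative, not
from any real driver): the driver copies the IV into heap memory before
mapping it, and frees it again when the request completes:

#include <linux/slab.h>
#include <crypto/skcipher.h>

/* Hypothetical helper: duplicate req->iv into DMA-able heap memory,
 * since req->iv may point into a caller's stack frame. */
static int example_prepare_iv(struct skcipher_request *req, u8 **iv_buf)
{
	unsigned int ivsize =
		crypto_skcipher_ivsize(crypto_skcipher_reqtfm(req));

	/* the extra allocation (and a matching kfree later) per request */
	*iv_buf = kmemdup(req->iv, ivsize, GFP_ATOMIC);
	return *iv_buf ? 0 : -ENOMEM;
}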
On Thu, Sep 06, 2018 at 10:11:59AM +0200, Ard Biesheuvel wrote:
>
> That way, we will almost certainly oops on a NULL pointer dereference
> right after, but we at least avoid the stack corruption.

A crash is just as bad as a BUG_ON.

Is this even a real problem? Do we have any users of this construct
that are using it on async algorithms?

Cheers,
On 6 September 2018 at 10:51, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Thu, Sep 06, 2018 at 10:11:59AM +0200, Ard Biesheuvel wrote:
>>
>> That way, we will almost certainly oops on a NULL pointer dereference
>> right after, but we at least avoid the stack corruption.
>
> A crash is just as bad as a BUG_ON.
>
> Is this even a real problem? Do we have any users of this construct
> that are using it on async algorithms?
>

Perhaps not, but it is not enforced atm.

In any case, limiting the reqsize is going to break things, so that
needs to occur based on the sync/async nature of the algo. That also
means we'll corrupt the stack if we ever end up using
SKCIPHER_REQUEST_ON_STACK() with an async algo whose reqsize is
greater than the sync reqsize limit, so I do think some additional
sanity check is appropriate.
On Thu, Sep 06, 2018 at 11:29:41AM +0200, Ard Biesheuvel wrote:
>
> Perhaps not, but it is not enforced atm.
>
> In any case, limiting the reqsize is going to break things, so that
> needs to occur based on the sync/async nature of the algo. That also
> means we'll corrupt the stack if we ever end up using
> SKCIPHER_REQUEST_ON_STACK() with an async algo whose reqsize is
> greater than the sync reqsize limit, so I do think some additional
> sanity check is appropriate.

I'd prefer compile-time based checks. Perhaps we can introduce a
wrapper around crypto_skcipher, say crypto_skcipher_sync, which could
then be used by SKCIPHER_REQUEST_ON_STACK to ensure that only sync
algorithms can use this construct.

Cheers,
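A rough sketch of the wrapper Herbert describes (hypothetical; the names and
details here are guesses, not a posted patch). Because the sync type would
only ever be produced by an allocator that masks out CRYPTO_ALG_ASYNC,
handing an async tfm to the on-stack macro becomes a type error at compile
time:

#include <crypto/skcipher.h>

/* Hypothetical compile-time-checked wrapper type. */
struct crypto_skcipher_sync {
	struct crypto_skcipher base;
};

static inline struct crypto_skcipher_sync *crypto_alloc_skcipher_sync(
	const char *alg_name, u32 type, u32 mask)
{
	/* base is the first member, so the cast is safe (including for
	 * ERR_PTR values). Masking CRYPTO_ALG_ASYNC requests a sync
	 * implementation only. */
	return (struct crypto_skcipher_sync *)
		crypto_alloc_skcipher(alg_name, type,
				      mask | CRYPTO_ALG_ASYNC);
}

/* Only a crypto_skcipher_sync can be passed here. */
#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, sync_tfm) \
	SKCIPHER_REQUEST_ON_STACK(name, &(sync_tfm)->base)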
On 6 September 2018 at 15:11, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Thu, Sep 06, 2018 at 11:29:41AM +0200, Ard Biesheuvel wrote:
>> [...]
>> In any case, limiting the reqsize is going to break things, so that
>> needs to occur based on the sync/async nature of the algo. That also
>> means we'll corrupt the stack if we ever end up using
>> SKCIPHER_REQUEST_ON_STACK() with an async algo whose reqsize is
>> greater than the sync reqsize limit, so I do think some additional
>> sanity check is appropriate.
>
> I'd prefer compile-time based checks. Perhaps we can introduce a
> wrapper around crypto_skcipher, say crypto_skcipher_sync, which could
> then be used by SKCIPHER_REQUEST_ON_STACK to ensure that only sync
> algorithms can use this construct.
>

That would require lots of changes in the callers, including ones that
already take care to use sync algos only.

How about we do something like the below instead?

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 2f327f090c3e..ace707d59cd9 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -19,6 +19,7 @@

 /**
  * struct skcipher_request - Symmetric key cipher request
+ * @__onstack: 1 if the request was allocated by SKCIPHER_REQUEST_ON_STACK
  * @cryptlen: Number of bytes to encrypt or decrypt
  * @iv: Initialisation Vector
  * @src: Source SG list
@@ -27,6 +28,7 @@
  * @__ctx: Start of private context data
  */
 struct skcipher_request {
+	unsigned char __onstack;
 	unsigned int cryptlen;

 	u8 *iv;
@@ -141,7 +143,7 @@ struct skcipher_alg {

 #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
 	char __##name##_desc[sizeof(struct skcipher_request) + \
-		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
+		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR = { 1 }; \
 	struct skcipher_request *name = (void *)__##name##_desc

 /**
@@ -437,6 +439,10 @@ static inline int crypto_skcipher_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);

+	if (req->__onstack &&
+	    (crypto_skcipher_alg(tfm)->base.cra_flags & CRYPTO_ALG_ASYNC))
+		return -EINVAL;
+
 	if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
 		return -ENOKEY;

@@ -458,6 +464,10 @@ static inline int crypto_skcipher_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);

+	if (req->__onstack &&
+	    (crypto_skcipher_alg(tfm)->base.cra_flags & CRYPTO_ALG_ASYNC))
+		return -EINVAL;
+
 	if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
 		return -ENOKEY;
On Thu, Sep 6, 2018 at 7:49 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On 6 September 2018 at 15:11, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>> I'd prefer compile-time based checks. Perhaps we can introduce a
>> wrapper around crypto_skcipher, say crypto_skcipher_sync, which could
>> then be used by SKCIPHER_REQUEST_ON_STACK to ensure that only sync
>> algorithms can use this construct.
>>
>
> That would require lots of changes in the callers, including ones that
> already take care to use sync algos only.
>
> How about we do something like the below instead?

Oh, I like this, thanks!

-Kees
On Wed, Sep 5, 2018 at 5:43 PM, Kees Cook <keescook@chromium.org> wrote:
> On Wed, Sep 5, 2018 at 3:49 PM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>> On 5 September 2018 at 23:05, Kees Cook <keescook@chromium.org> wrote:
>>> Hrm, excellent point. Looking at this again:
>>> [...]
>>> And of the crt_ablkcipher.reqsize assignments/initializers, I found:
>>>
>>> ablkcipher reqsize:
>>>     1	struct dcp_aes_req_ctx
>>>     8	struct atmel_tdes_reqctx
>>>     8	struct cryptd_blkcipher_request_ctx
>>>     8	struct mtk_aes_reqctx
>>>     8	struct omap_des_reqctx
>>>     8	struct s5p_aes_reqctx
>>>     8	struct sahara_aes_reqctx
>>>     8	struct stm32_cryp_reqctx
>>>     8	struct stm32_cryp_reqctx
>>>    16	struct ablk_ctx
>>>    24	struct atmel_aes_reqctx
>>>    48	struct omap_aes_reqctx
>>>    48	struct omap_aes_reqctx
>>>    48	struct qat_crypto_request
>>>    56	struct artpec6_crypto_request_context
>>>    64	struct chcr_blkcipher_req_ctx
>>>    80	struct spacc_req
>>>    80	struct virtio_crypto_sym_request
>>>   136	struct qce_cipher_reqctx
>>>   168	struct n2_request_context
>>>   328	struct ccp_des3_req_ctx
>>>   400	struct ccp_aes_req_ctx
>>>   536	struct hifn_request_context
>>>   992	struct cvm_req_ctx
>>>  2456	struct iproc_reqctx_s

All of these are ASYNC (they're all crt_ablkcipher), so IIUC, I can
ignore them.

>>> The base ablkcipher wrapper is:
>>>    80	struct ablkcipher_request
>>> [...]
>>
>> The skcipher implementations based on crypto IP blocks are typically
>> asynchronous, and I wouldn't be surprised if a fair number of
>> SKCIPHER_REQUEST_ON_STACK() users are limited to synchronous
>> skciphers.
>
> Looks similar to ahash vs shash. :) Yes, so nearly all
> crypto_alloc_skcipher() users explicitly mask away ASYNC. What's left
> appears to be:
>
> crypto/drbg.c:	sk_tfm = crypto_alloc_skcipher(ctr_name, 0, 0);
> crypto/tcrypt.c:	tfm = crypto_alloc_skcipher(algo, 0, async ? 0 : CRYPTO_ALG_ASYNC);
> drivers/crypto/omap-aes.c:	ctx->ctr = crypto_alloc_skcipher("ecb(aes)", 0, 0);
> drivers/md/dm-crypt.c:	cc->cipher_tfm.tfms[i] = crypto_alloc_skcipher(ciphermode, 0, 0);
> drivers/md/dm-integrity.c:	ic->journal_crypt = crypto_alloc_skcipher(ic->journal_crypt_alg.alg_string, 0, 0);
> fs/crypto/keyinfo.c:	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
> fs/crypto/keyinfo.c:	ctfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
> fs/ecryptfs/crypto.c:	crypt_stat->tfm = crypto_alloc_skcipher(full_alg_name, 0, 0);
>
> I'll cross-reference this with SKCIPHER_REQUEST_ON_STACK...

None of these use SKCIPHER_REQUEST_ON_STACK that I can find.

>> So we could formalize this and limit SKCIPHER_REQUEST_ON_STACK() to
>> synchronous skciphers, which implies that the reqsize limit only has
>> to apply to synchronous skciphers as well. But before we can do this,
>> we have to identify the remaining occurrences that allow asynchronous
>> skciphers to be used, and replace them with heap allocations.
>
> Sounds good; thanks!

crypto_init_skcipher_ops_blkcipher() doesn't touch reqsize at all, so
the only places I can find it gets changed are with direct callers of
crypto_skcipher_set_reqsize(), which, when wrapping a sync blkcipher,
start with a reqsize == 0. So, the remaining non-ASYNC callers ask for:

    4	struct sun4i_cipher_req_ctx
   96	struct crypto_rfc3686_req_ctx
  375	sum:
	  160	crypto_skcipher_blocksize(cipher) (max)
	  152	struct crypto_cts_reqctx
	   63	align_mask (max)
  384	struct rctx

So, following your patch to encrypt/decrypt, I can add a reqsize check
there. How does this look, on top of your patch?

--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -144,9 +144,10 @@ struct skcipher_alg {
 /*
  * This must only ever be used with synchronous algorithms.
  */
+#define MAX_SYNC_SKCIPHER_REQSIZE	384
 #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
 	char __##name##_desc[sizeof(struct skcipher_request) + \
-		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR = { 1 }; \
+		MAX_SYNC_SKCIPHER_REQSIZE] CRYPTO_MINALIGN_ATTR = { 1 }; \
 	struct skcipher_request *name = (void *)__##name##_desc

 /**
@@ -442,10 +443,14 @@ static inline int crypto_skcipher_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);

-	if (req->__onstack &&
-	    WARN_ON(crypto_skcipher_alg(tfm)->base.cra_flags &
-		    CRYPTO_ALG_ASYNC))
-		return -EINVAL;
+	if (req->__onstack) {
+		if (WARN_ON(crypto_skcipher_alg(tfm)->base.cra_flags &
+			    CRYPTO_ALG_ASYNC))
+			return -EINVAL;
+		if (WARN_ON(crypto_skcipher_reqsize(tfm) >
+			    MAX_SYNC_SKCIPHER_REQSIZE))
+			return -ENOSPC;
+	}

...etc
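For context, a typical caller that this macro is meant to serve looks
something like the following usage sketch (the function name and algorithm
choice are illustrative). Allocating with CRYPTO_ALG_ASYNC in the mask
guarantees a synchronous tfm, so the new WARN_ONs above should never fire:

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

static int example_encrypt(struct scatterlist *src, struct scatterlist *dst,
			   unsigned int len, u8 *iv,
			   const u8 *key, unsigned int keylen)
{
	struct crypto_skcipher *tfm;
	int err;

	/* mask out async implementations: sync tfms only */
	tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (!err) {
		SKCIPHER_REQUEST_ON_STACK(req, tfm);

		skcipher_request_set_tfm(req, tfm);
		skcipher_request_set_callback(req, 0, NULL, NULL);
		skcipher_request_set_crypt(req, src, dst, len, iv);

		err = crypto_skcipher_encrypt(req);
		skcipher_request_zero(req);
	}

	crypto_free_skcipher(tfm);
	return err;
}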
On Thu, Sep 6, 2018 at 1:22 PM, Kees Cook <keescook@chromium.org> wrote:
> [...]
> So, following your patch to encrypt/decrypt, I can add a reqsize check
> there. How does this look, on top of your patch?
>
> --- a/include/crypto/skcipher.h
> +++ b/include/crypto/skcipher.h
> @@ -144,9 +144,10 @@ struct skcipher_alg {
>  /*
>   * This must only ever be used with synchronous algorithms.
>   */
> +#define MAX_SYNC_SKCIPHER_REQSIZE	384
>  #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
>  	char __##name##_desc[sizeof(struct skcipher_request) + \
> -		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR = { 1 }; \
> +		MAX_SYNC_SKCIPHER_REQSIZE] CRYPTO_MINALIGN_ATTR = { 1 }; \
>  	struct skcipher_request *name = (void *)__##name##_desc

If the lack of named initializer is too ugly, we could do something
crazy like:

#define MAX_SYNC_SKCIPHER_REQSIZE	384

struct skcipher_request_on_stack {
	union {
		struct skcipher_request req;
		char bytes[sizeof(struct skcipher_request) +
			   MAX_SYNC_SKCIPHER_REQSIZE];
	};
};

/*
 * This must only ever be used with synchronous algorithms.
 */
#define SKCIPHER_REQUEST_ON_STACK(name) \
	struct skcipher_request_on_stack __##name##_req = \
		{ .req.__onstack = 1 }; \
	struct skcipher_request *name = &(__##name##_req.req)

-Kees
diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index d2926ecae2ac..6da811c0747e 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -130,6 +130,9 @@ static inline struct crypto_skcipher *crypto_spawn_skcipher(
 static inline int crypto_skcipher_set_reqsize(
 	struct crypto_skcipher *skcipher, unsigned int reqsize)
 {
+	if (WARN_ON(reqsize > SKCIPHER_MAX_REQSIZE))
+		return -EINVAL;
+
 	skcipher->reqsize = reqsize;

 	return 0;
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 2f327f090c3e..c48e194438cf 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -139,9 +139,11 @@ struct skcipher_alg {
 	struct crypto_alg base;
 };

+#define SKCIPHER_MAX_REQSIZE	472
+
 #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
 	char __##name##_desc[sizeof(struct skcipher_request) + \
-		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
+		SKCIPHER_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
 	struct skcipher_request *name = (void *)__##name##_desc

 /**
In the quest to remove all stack VLA usage from the kernel[1], this
caps the skcipher request size similar to other limits and adds a
sanity check at registration. Looking at instrumented tcrypt output,
the largest is for lrw:

	crypt: testing lrw(aes)
	crypto_skcipher_set_reqsize: 8
	crypto_skcipher_set_reqsize: 88
	crypto_skcipher_set_reqsize: 472

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/crypto/internal/skcipher.h | 3 +++
 include/crypto/skcipher.h          | 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)