From patchwork Thu Mar 15 11:18:58 2018
X-Patchwork-Submitter: Salvatore Mesoraca
X-Patchwork-Id: 10284243
From: Salvatore Mesoraca
To: linux-kernel@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com, linux-crypto@vger.kernel.org,
 "David S. Miller", Herbert Xu, Kees Cook, Salvatore Mesoraca,
 Eric Biggers
Subject: [PATCH v2] crypto: ctr - avoid VLA use
Date: Thu, 15 Mar 2018 12:18:58 +0100
Message-Id: <1521112738-13250-1-git-send-email-s.mesoraca16@gmail.com>
X-Mailer: git-send-email 1.9.1

All ciphers implemented in Linux have a block size of at most 16 bytes,
and the most alignment-demanding hardware requires 16-byte alignment of
the block buffer.  We avoid two VLAs [1] by always allocating 16 bytes
with 16-byte alignment, unless the architecture supports efficient
unaligned accesses.  We also check the selected cipher at instance
creation time; if it doesn't comply with these limits, the creation
fails.
[1] https://lkml.org/lkml/2018/3/7/621

Signed-off-by: Salvatore Mesoraca
---
 crypto/ctr.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/crypto/ctr.c b/crypto/ctr.c
index 854d924..2c9f80f 100644
--- a/crypto/ctr.c
+++ b/crypto/ctr.c
@@ -21,6 +21,14 @@
 #include
 #include
 
+#define MAX_BLOCKSIZE 16
+
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+#define MAX_ALIGNMASK 15
+#else
+#define MAX_ALIGNMASK 0
+#endif
+
 struct crypto_ctr_ctx {
 	struct crypto_cipher *child;
 };
@@ -58,7 +66,7 @@ static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
 	unsigned int bsize = crypto_cipher_blocksize(tfm);
 	unsigned long alignmask = crypto_cipher_alignmask(tfm);
 	u8 *ctrblk = walk->iv;
-	u8 tmp[bsize + alignmask];
+	u8 tmp[MAX_BLOCKSIZE + MAX_ALIGNMASK];
 	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
 	u8 *src = walk->src.virt.addr;
 	u8 *dst = walk->dst.virt.addr;
@@ -106,7 +114,7 @@ static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
 	unsigned int nbytes = walk->nbytes;
 	u8 *ctrblk = walk->iv;
 	u8 *src = walk->src.virt.addr;
-	u8 tmp[bsize + alignmask];
+	u8 tmp[MAX_BLOCKSIZE + MAX_ALIGNMASK];
 	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
 
 	do {
@@ -206,6 +214,14 @@ static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
 	if (alg->cra_blocksize < 4)
 		goto out_put_alg;
 
+	/* Block size must be <= MAX_BLOCKSIZE. */
+	if (alg->cra_blocksize > MAX_BLOCKSIZE)
+		goto out_put_alg;
+
+	/* Alignmask must be <= MAX_ALIGNMASK. */
+	if (alg->cra_alignmask > MAX_ALIGNMASK)
+		goto out_put_alg;
+
 	/* If this is false we'd fail the alignment of crypto_inc. */
 	if (alg->cra_blocksize % 4)
 		goto out_put_alg;