From patchwork Wed Sep 6 17:08:31 2023
X-Patchwork-Submitter: Sabrina Dubroca
X-Patchwork-Id: 13375803
X-Patchwork-Delegate: kuba@kernel.org
From: Sabrina Dubroca
To: netdev@vger.kernel.org
Cc: Sabrina Dubroca, Dave Watson, Jakub Kicinski, Vakul Garg, Boris Pismenny, John Fastabend
Subject: [PATCH net 1/5] net: tls: handle -EBUSY on async encrypt/decrypt requests
Date: Wed, 6 Sep 2023 19:08:31 +0200
Message-Id: <9681d1febfec295449a62300938ed2ae66983f28.1694018970.git.sd@queasysnail.net>

Since we're setting the CRYPTO_TFM_REQ_MAY_BACKLOG flag on our requests
to the crypto API, crypto_aead_{encrypt,decrypt} can return -EBUSY
instead of -EINPROGRESS in valid situations.
For example, when the cryptd queue for AESNI is full (easy to trigger
with an artificially low cryptd.cryptd_max_cpu_qlen), requests will be
enqueued to the backlog but still processed. In that case, the async
callback will also be called twice: first with err == -EINPROGRESS,
which it seems we can just ignore, then with err == 0.

I've only tested this on AESNI with cryptd.

Fixes: a54667f6728c ("tls: Add support for encryption using async offload accelerator")
Fixes: 94524d8fc965 ("net/tls: Add support for async decryption of tls records")
Signed-off-by: Sabrina Dubroca
Reviewed-by: Jakub Kicinski
---
 net/tls/tls_sw.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 1ed4a611631f..4f3dd0403efb 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -196,6 +196,9 @@ static void tls_decrypt_done(void *data, int err)
 	struct sock *sk;
 	int aead_size;
 
+	if (err == -EINPROGRESS)
+		return;
+
 	aead_size = sizeof(*aead_req) + crypto_aead_reqsize(aead);
 	aead_size = ALIGN(aead_size, __alignof__(*dctx));
 	dctx = (void *)((u8 *)aead_req + aead_size);
@@ -261,7 +264,7 @@ static int tls_do_decryption(struct sock *sk,
 	}
 
 	ret = crypto_aead_decrypt(aead_req);
-	if (ret == -EINPROGRESS) {
+	if (ret == -EINPROGRESS || ret == -EBUSY) {
 		if (darg->async)
 			return 0;
@@ -443,6 +446,9 @@ static void tls_encrypt_done(void *data, int err)
 	struct sock *sk;
 	int pending;
 
+	if (err == -EINPROGRESS)
+		return;
+
 	msg_en = &rec->msg_encrypted;
 	sk = rec->sk;
@@ -544,7 +550,7 @@ static int tls_do_encryption(struct sock *sk,
 	atomic_inc(&ctx->encrypt_pending);
 
 	rc = crypto_aead_encrypt(aead_req);
-	if (!rc || rc != -EINPROGRESS) {
+	if (!rc || (rc != -EINPROGRESS && rc != -EBUSY)) {
 		atomic_dec(&ctx->encrypt_pending);
 		sge->offset -= prot->prepend_size;
 		sge->length += prot->prepend_size;
@@ -552,7 +558,7 @@ static int tls_do_encryption(struct sock *sk,
 
 	if (!rc) {
 		WRITE_ONCE(rec->tx_ready, true);
-	} else if (rc != -EINPROGRESS) {
+	} else if (rc != -EINPROGRESS && rc != -EBUSY) {
 		list_del(&rec->list);
 		return rc;
 	}
@@ -779,7 +785,7 @@ static int tls_push_record(struct sock *sk, int flags,
 	rc = tls_do_encryption(sk, tls_ctx, ctx, req,
 			       msg_pl->sg.size + prot->tail_size, i);
 	if (rc < 0) {
-		if (rc != -EINPROGRESS) {
+		if (rc != -EINPROGRESS && rc != -EBUSY) {
 			tls_err_abort(sk, -EBADMSG);
 			if (split) {
 				tls_ctx->pending_open_record_frags = true;
@@ -990,7 +996,7 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
 	if (unlikely(msg->msg_controllen)) {
 		ret = tls_process_cmsg(sk, msg, &record_type);
 		if (ret) {
-			if (ret == -EINPROGRESS)
+			if (ret == -EINPROGRESS || ret == -EBUSY)
 				num_async++;
 			else if (ret != -EAGAIN)
 				goto send_end;
@@ -1071,7 +1077,7 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
 					  record_type, &copied,
 					  msg->msg_flags);
 		if (ret) {
-			if (ret == -EINPROGRESS)
+			if (ret == -EINPROGRESS || ret == -EBUSY)
 				num_async++;
 			else if (ret == -ENOMEM)
 				goto wait_for_memory;
@@ -1125,7 +1131,7 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
 					  record_type, &copied,
 					  msg->msg_flags);
 		if (ret) {
-			if (ret == -EINPROGRESS)
+			if (ret == -EINPROGRESS || ret == -EBUSY)
 				num_async++;
 			else if (ret == -ENOMEM)
 				goto wait_for_memory;
@@ -1248,6 +1254,7 @@ void tls_sw_splice_eof(struct socket *sock)
 			goto unlock;
 		retrying = true;
 		goto retry;
+	case -EBUSY:
 	case -EINPROGRESS:
 		break;
 	default:
@@ -2106,7 +2113,7 @@ int tls_sw_recvmsg(struct sock *sk,
 	__skb_queue_purge(&ctx->async_hold);
 
 	if (ret) {
-		if (err >= 0 || err == -EINPROGRESS)
+		if (err >= 0 || err == -EINPROGRESS || err == -EBUSY)
 			err = ret;
 		decrypted = 0;
 		goto end;

From patchwork Wed Sep 6 17:08:32 2023
X-Patchwork-Submitter: Sabrina Dubroca
X-Patchwork-Id: 13375804
X-Patchwork-Delegate: kuba@kernel.org
From: Sabrina Dubroca
To: netdev@vger.kernel.org
Cc: Sabrina Dubroca, Dave Watson, Jakub Kicinski, Vakul Garg, Boris Pismenny, John Fastabend
Subject: [PATCH net 2/5] tls: fix use-after-free with partial reads and async decrypt
Date: Wed, 6 Sep 2023 19:08:32 +0200

tls_decrypt_sg doesn't take a reference on the pages from clear_skb, so
the put_page() in tls_decrypt_done releases them, and we trigger a
use-after-free in process_rx_list when we try to read from the
partially-read skb. This can be seen with the recv_and_splice test case.
Fixes: fd31f3996af2 ("tls: rx: decrypt into a fresh skb")
Signed-off-by: Sabrina Dubroca
---
 net/tls/tls_sw.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 4f3dd0403efb..f23cceaceb36 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -63,6 +63,7 @@ struct tls_decrypt_ctx {
 	u8 iv[MAX_IV_SIZE];
 	u8 aad[TLS_MAX_AAD_SIZE];
 	u8 tail;
+	bool put_outsg;
 	struct scatterlist sg[];
 };
@@ -221,7 +222,8 @@ static void tls_decrypt_done(void *data, int err)
 		for_each_sg(sg_next(sgout), sg, UINT_MAX, pages) {
 			if (!sg)
 				break;
-			put_page(sg_page(sg));
+			if (dctx->put_outsg)
+				put_page(sg_page(sg));
 		}
 	}
@@ -1549,6 +1551,8 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
 	if (err < 0)
 		goto exit_free;
 
+	dctx->put_outsg = false;
+
 	if (clear_skb) {
 		sg_init_table(sgout, n_sgout);
 		sg_set_buf(&sgout[0], dctx->aad, prot->aad_size);
@@ -1558,6 +1562,8 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
 		if (err < 0)
 			goto exit_free;
 	} else if (out_iov) {
+		dctx->put_outsg = true;
+
 		sg_init_table(sgout, n_sgout);
 		sg_set_buf(&sgout[0], dctx->aad, prot->aad_size);

From patchwork Wed Sep 6 17:08:33 2023
X-Patchwork-Submitter: Sabrina Dubroca
X-Patchwork-Id: 13375805
X-Patchwork-Delegate: kuba@kernel.org
From: Sabrina Dubroca
To: netdev@vger.kernel.org
Cc: Sabrina Dubroca, Dave Watson, Jakub Kicinski, Vakul Garg, Boris Pismenny, John Fastabend
Subject: [PATCH net 3/5] tls: fix returned read length with async !zc decrypt
Date: Wed, 6 Sep 2023 19:08:33 +0200

We can double-count the chunk size:
- once via decrypted
- once in async_copy_bytes, which then becomes part of
  process_rx_list's return value

Subtract it from decrypted before adding in process_rx_list's return
value.

Fixes: 4d42cd6bc2ac ("tls: rx: fix return value for async crypto")
Signed-off-by: Sabrina Dubroca
---
This logic is so complex, in this series I'm just trying to make all
the selftests pass at the same time :/

 net/tls/tls_sw.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index f23cceaceb36..babbd43d41ed 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2132,7 +2132,13 @@ int tls_sw_recvmsg(struct sock *sk,
 		else
 			err = process_rx_list(ctx, msg, &control, 0,
 					      async_copy_bytes, is_peek);
-		decrypted += max(err, 0);
+
+		if (err > 0) {
+			/* decrypted already accounts for async_copy_bytes,
+			 * we don't want to double-count
+			 */
+			decrypted += err - async_copy_bytes;
+		}
 	}
 
 	copied += decrypted;

From patchwork Wed Sep 6 17:08:34 2023
X-Patchwork-Submitter: Sabrina Dubroca
X-Patchwork-Id: 13375806
X-Patchwork-Delegate: kuba@kernel.org
From: Sabrina Dubroca
To: netdev@vger.kernel.org
Cc: Sabrina Dubroca, Dave Watson, Jakub Kicinski, Vakul Garg, Boris Pismenny, John Fastabend
Subject: [PATCH net 4/5] tls: fix race condition in async decryption of corrupted records
Date: Wed, 6 Sep 2023 19:08:34 +0200

In case a corrupted record is decrypted asynchronously, the error can
be reported in two ways:
- via the completion's error mechanism
- via sk->sk_err

If all asynchronous decrypts finish before we reach the end of
tls_sw_recvmsg, decrypt_pending will be 0 and we don't check the
completion for errors. We should still check sk->sk_err, otherwise
we'll miss the error and return garbage to userspace. This is visible
in the bad_auth test case.

Fixes: 94524d8fc965 ("net/tls: Add support for async decryption of tls records")
Signed-off-by: Sabrina Dubroca
---
 net/tls/tls_sw.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index babbd43d41ed..f80a2ea1dd7e 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2116,6 +2116,9 @@ int tls_sw_recvmsg(struct sock *sk,
 	ret = 0;
 	if (pending)
 		ret = crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
+	/* Crypto completion could have run before us, check sk_err */
+	if (ret == 0)
+		ret = -sk->sk_err;
 	__skb_queue_purge(&ctx->async_hold);
 
 	if (ret) {

From patchwork Wed Sep 6 17:08:35 2023
X-Patchwork-Submitter: Sabrina Dubroca
X-Patchwork-Id: 13375807
X-Patchwork-Delegate: kuba@kernel.org
From: Sabrina Dubroca
To: netdev@vger.kernel.org
Cc: Sabrina Dubroca, Dave Watson, Jakub Kicinski, Vakul Garg, Boris Pismenny, John Fastabend
Subject: [PATCH net 5/5] tls: don't decrypt the next record if it's of a different type
Date: Wed, 6 Sep 2023 19:08:35 +0200

If the next record is of a different type, we won't copy it to
userspace in this round, tls_record_content_type will stop us just
after decryption. Skip decryption until the next recvmsg() call.

This fixes a use-after-free when a data record is decrypted
asynchronously but doesn't fill the userspace buffer, and the next
record is non-data, for example in the bad_cmsg selftest.

Fixes: c0ab4732d4c6 ("net/tls: Do not use async crypto for non-data records")
Signed-off-by: Sabrina Dubroca
---
 net/tls/tls_sw.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index f80a2ea1dd7e..86b835b15872 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2010,6 +2010,9 @@ int tls_sw_recvmsg(struct sock *sk,
 		else
 			darg.async = false;
 
+		if (ctx->async_capable && control && tlm->control != control)
+			goto recv_end;
+
 		err = tls_rx_one_record(sk, msg, &darg);
 		if (err < 0) {
 			tls_err_abort(sk, -EBADMSG);