From patchwork Mon Nov 9 10:51:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890961 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8C19C4741F for ; Mon, 9 Nov 2020 10:52:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 61ED1206E5 for ; Mon, 9 Nov 2020 10:52:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729364AbgKIKwl (ORCPT ); Mon, 9 Nov 2020 05:52:41 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:2640 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729228AbgKIKwk (ORCPT ); Mon, 9 Nov 2020 05:52:40 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMa011444; Mon, 9 Nov 2020 02:52:30 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 01/12] cxgb4/ch_ktls: decrypted bit is not enough Date: Mon, 9 Nov 2020 16:21:31 +0530 Message-Id: <20201109105142.15398-2-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org If an skb carries retransmitted data that starts before the start marker, e.g. a ChangeCipherSpec (ccs) message, the decrypted bit won't be set for it; yet if the skb also has some data to encrypt, it must still be given to the crypto ULD. So instead of checking the decrypted bit, check whether the socket is tls offloaded. Also, unless the skb has some data to encrypt, there is no need to hand it to the tls offload handler. v2->v3: - Removed ifdef.
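A condensed sketch of the check this patch switches to, assuming the helper lands in cxgb4_uld.h exactly as in the diff below; the xmit path hands the skb to the kTLS ULD only when the socket is offloaded and there is payload beyond the TCP header:

static inline bool cxgb4_is_ktls_skb(struct sk_buff *skb)
{
	/* socket must be TLS TX device offloaded */
	return skb->sk && tls_is_sk_tx_device_offloaded(skb->sk);
}

/* in cxgb4_eth_xmit(), instead of testing skb->decrypted: */
if (cxgb4_is_ktls_skb(skb) &&
    (skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb))))
	return adap->uld[CXGB4_ULD_KTLS].tx_handler(skb, dev);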
Fixes: 5a4b9fe7fece ("cxgb4/chcr: complete record tx handling") Signed-off-by: Rohit Maheshwari --- drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 1 + drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h | 5 +++++ drivers/net/ethernet/chelsio/cxgb4/sge.c | 3 ++- .../net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 4 ---- 4 files changed, 8 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c index a952fe198eb9..7fd264a6d085 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c @@ -1176,6 +1176,7 @@ static u16 cxgb_select_queue(struct net_device *dev, struct sk_buff *skb, txq = netdev_pick_tx(dev, skb, sb_dev); if (xfrm_offload(skb) || is_ptp_enabled(skb, dev) || skb->encapsulation || + cxgb4_is_ktls_skb(skb) || (proto != IPPROTO_TCP && proto != IPPROTO_UDP)) txq = txq % pi->nqsets; diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h index b169776ab484..e2a4941fa802 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h @@ -493,6 +493,11 @@ struct cxgb4_uld_info { #endif }; +static inline bool cxgb4_is_ktls_skb(struct sk_buff *skb) +{ + return skb->sk && tls_is_sk_tx_device_offloaded(skb->sk); +} + void cxgb4_uld_enable(struct adapter *adap); void cxgb4_register_uld(enum cxgb4_uld type, const struct cxgb4_uld_info *p); int cxgb4_unregister_uld(enum cxgb4_uld type); diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c index a9e9c7ae565d..01bd9c0dfe4e 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c @@ -1422,7 +1422,8 @@ static netdev_tx_t cxgb4_eth_xmit(struct sk_buff *skb, struct net_device *dev) #endif /* CHELSIO_IPSEC_INLINE */ #if IS_ENABLED(CONFIG_CHELSIO_TLS_DEVICE) - if (skb->decrypted) + if (cxgb4_is_ktls_skb(skb) && + (skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb)))) return adap->uld[CXGB4_ULD_KTLS].tx_handler(skb, dev); #endif /* CHELSIO_TLS_DEVICE */ diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index 5195f692f14d..43c723c72c61 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -1878,10 +1878,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) mss = skb_is_gso(skb) ? 
skb_shinfo(skb)->gso_size : skb->data_len; - /* check if we haven't set it for ktls offload */ - if (!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk)) - goto out; - tls_ctx = tls_get_ctx(skb->sk); if (unlikely(tls_ctx->netdev != dev)) goto out; From patchwork Mon Nov 9 10:51:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890957 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8BCB0C388F7 for ; Mon, 9 Nov 2020 10:52:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3A9802080C for ; Mon, 9 Nov 2020 10:52:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729330AbgKIKwj (ORCPT ); Mon, 9 Nov 2020 05:52:39 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:23021 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729228AbgKIKwh (ORCPT ); Mon, 9 Nov 2020 05:52:37 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMb011444; Mon, 9 Nov 2020 02:52:32 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 02/12] ch_ktls: Correction in finding correct length Date: Mon, 9 Nov 2020 16:21:32 +0530 Message-Id: <20201109105142.15398-3-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org There is a possibility of linear skbs coming in. Correcting the length extraction logic. v2->v3: - Separated un-related changes from this patch. Fixes: 5a4b9fe7fece ("cxgb4/chcr: complete record tx handling") Signed-off-by: Rohit Maheshwari --- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index 43c723c72c61..447aec7ae954 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -967,7 +967,7 @@ chcr_ktls_write_tcp_options(struct chcr_ktls_info *tx_info, struct sk_buff *skb, /* packet length = eth hdr len + ip hdr len + tcp hdr len * (including options). 
*/ - pktlen = skb->len - skb->data_len; + pktlen = skb_transport_offset(skb) + tcp_hdrlen(skb); ctrl = sizeof(*cpl) + pktlen; len16 = DIV_ROUND_UP(sizeof(*wr) + ctrl, 16); @@ -1860,6 +1860,7 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, /* nic tls TX handler */ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) { + u32 tls_end_offset, tcp_seq, skb_data_len, skb_offset; struct ch_ktls_port_stats_debug *port_stats; struct chcr_ktls_ofld_ctx_tx *tx_ctx; struct ch_ktls_stats_debug *stats; @@ -1867,7 +1868,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) int data_len, qidx, ret = 0, mss; struct tls_record_info *record; struct chcr_ktls_info *tx_info; - u32 tls_end_offset, tcp_seq; struct tls_context *tls_ctx; struct sk_buff *local_skb; struct sge_eth_txq *q; @@ -1875,8 +1875,11 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) unsigned long flags; tcp_seq = ntohl(th->seq); + skb_offset = skb_transport_offset(skb) + tcp_hdrlen(skb); + skb_data_len = skb->len - skb_offset; + data_len = skb_data_len; - mss = skb_is_gso(skb) ? skb_shinfo(skb)->gso_size : skb->data_len; + mss = skb_is_gso(skb) ? skb_shinfo(skb)->gso_size : data_len; tls_ctx = tls_get_ctx(skb->sk); if (unlikely(tls_ctx->netdev != dev)) @@ -1922,8 +1925,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) /* copy skb contents into local skb */ chcr_ktls_skb_copy(skb, local_skb); - /* go through the skb and send only one record at a time. */ - data_len = skb->data_len; /* TCP segments can be in received either complete or partial. * chcr_end_part_handler will handle cases if complete record or end * part of the record is received. Incase of partial end part of record, @@ -2020,9 +2021,9 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) } while (data_len > 0); - tx_info->prev_seq = ntohl(th->seq) + skb->data_len; + tx_info->prev_seq = ntohl(th->seq) + skb_data_len; atomic64_inc(&port_stats->ktls_tx_encrypted_packets); - atomic64_add(skb->data_len, &port_stats->ktls_tx_encrypted_bytes); + atomic64_add(skb_data_len, &port_stats->ktls_tx_encrypted_bytes); /* tcp finish is set, send a separate tcp msg including all the options * as well. 
From patchwork Mon Nov 9 10:51:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890953 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F117DC388F7 for ; Mon, 9 Nov 2020 10:52:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AAA4320679 for ; Mon, 9 Nov 2020 10:52:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729395AbgKIKwm (ORCPT ); Mon, 9 Nov 2020 05:52:42 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:7783 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729343AbgKIKwk (ORCPT ); Mon, 9 Nov 2020 05:52:40 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMc011444; Mon, 9 Nov 2020 02:52:36 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 03/12] ch_ktls: Update checksum information Date: Mon, 9 Nov 2020 16:21:33 +0530 Message-Id: <20201109105142.15398-4-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Checksum update was missing in the WR.
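As a rough illustration of the missing piece (variables and field macros are those used in the diff below), the work request's control word now carries the checksum type together with the Ethernet and IP header lengths instead of being left at zero:

	u64 cntrl1;

	if (tx_info->ip_family == AF_INET)
		cntrl1 = TXPKT_CSUM_TYPE_V(TX_CSUM_TCPIP);	/* IPv4 + TCP csum */
	else
		cntrl1 = TXPKT_CSUM_TYPE_V(TX_CSUM_TCPIP6);	/* IPv6 + TCP csum */

	cntrl1 |= T6_TXPKT_ETHHDR_LEN_V(maclen - ETH_HLEN) |
		  TXPKT_IPHDR_LEN_V(iplen);
	/* checksum offload; was previously hard-coded to 0 */
	cpl->ctrl1 = cpu_to_be64(cntrl1);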
Fixes: 429765a149f1 ("chcr: handle partial end part of a record") Signed-off-by: Rohit Maheshwari --- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index 447aec7ae954..b7a3e757ee72 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -959,6 +959,7 @@ chcr_ktls_write_tcp_options(struct chcr_ktls_info *tx_info, struct sk_buff *skb, struct iphdr *ip; int credits; u8 buf[150]; + u64 cntrl1; void *pos; iplen = skb_network_header_len(skb); @@ -997,22 +998,28 @@ chcr_ktls_write_tcp_options(struct chcr_ktls_info *tx_info, struct sk_buff *skb, TXPKT_PF_V(tx_info->adap->pf)); cpl->pack = 0; cpl->len = htons(pktlen); - /* checksum offload */ - cpl->ctrl1 = 0; - - pos = cpl + 1; memcpy(buf, skb->data, pktlen); if (tx_info->ip_family == AF_INET) { /* we need to correct ip header len */ ip = (struct iphdr *)(buf + maclen); ip->tot_len = htons(pktlen - maclen); + cntrl1 = TXPKT_CSUM_TYPE_V(TX_CSUM_TCPIP); #if IS_ENABLED(CONFIG_IPV6) } else { ip6 = (struct ipv6hdr *)(buf + maclen); ip6->payload_len = htons(pktlen - maclen - iplen); + cntrl1 = TXPKT_CSUM_TYPE_V(TX_CSUM_TCPIP6); #endif } + + cntrl1 |= T6_TXPKT_ETHHDR_LEN_V(maclen - ETH_HLEN) | + TXPKT_IPHDR_LEN_V(iplen); + /* checksum offload */ + cpl->ctrl1 = cpu_to_be64(cntrl1); + + pos = cpl + 1; + /* now take care of the tcp header, if fin is not set then clear push * bit as well, and if fin is set, it will be sent at the last so we * need to update the tcp sequence number as per the last packet. From patchwork Mon Nov 9 10:51:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890963 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21B6EC2D0A3 for ; Mon, 9 Nov 2020 10:52:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C0A78206E5 for ; Mon, 9 Nov 2020 10:52:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729343AbgKIKwq (ORCPT ); Mon, 9 Nov 2020 05:52:46 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:9293 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729228AbgKIKwo (ORCPT ); Mon, 9 Nov 2020 05:52:44 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMd011444; Mon, 9 Nov 2020 02:52:39 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 04/12] cxgb4/ch_ktls: creating skbs causes panic Date: Mon, 9 Nov 2020 16:21:34 +0530 Message-Id: <20201109105142.15398-5-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: 
<20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Creating SKB per tls record and freeing the original one causes panic. There will be race if connection reset is requested. By freeing original skb, refcnt will be decremented and that means, there is no pending record to send, and so tls_dev_del will be requested in control path while SKB of related connection is in queue. Better approach is to use same SKB to send one record (partial data) at a time. We still have to create a new SKB when partial last part of a record is requested. This fix introduces new API cxgb4_write_partial_sgl() to send partial part of skb. Present cxgb4_write_sgl can only provide feasibility to start from an offset which limits to header only and it can write sgls for the whole skb len. But this new API will help in both. It can start from any offset and can end writing in middle of the skb. v4->v5: - Removed extra changes. Fixes: 429765a149f1 ("chcr: handle partial end part of a record") Signed-off-by: Rohit Maheshwari --- drivers/net/ethernet/chelsio/cxgb4/cxgb4.h | 3 + drivers/net/ethernet/chelsio/cxgb4/sge.c | 108 +++++++ .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 284 +++++++----------- 3 files changed, 226 insertions(+), 169 deletions(-) diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h index 3352dad6ca99..27308600da15 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h @@ -2124,6 +2124,9 @@ void cxgb4_inline_tx_skb(const struct sk_buff *skb, const struct sge_txq *q, void cxgb4_write_sgl(const struct sk_buff *skb, struct sge_txq *q, struct ulptx_sgl *sgl, u64 *end, unsigned int start, const dma_addr_t *addr); +void cxgb4_write_partial_sgl(const struct sk_buff *skb, struct sge_txq *q, + struct ulptx_sgl *sgl, u64 *end, + const dma_addr_t *addr, u32 start, u32 send_len); void cxgb4_ring_tx_db(struct adapter *adap, struct sge_txq *q, int n); int t4_set_vlan_acl(struct adapter *adap, unsigned int mbox, unsigned int vf, u16 vlan); diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c index 01bd9c0dfe4e..196652a114c5 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c @@ -890,6 +890,114 @@ void cxgb4_write_sgl(const struct sk_buff *skb, struct sge_txq *q, } EXPORT_SYMBOL(cxgb4_write_sgl); +/* cxgb4_write_partial_sgl - populate SGL for partial packet + * @skb: the packet + * @q: the Tx queue we are writing into + * @sgl: starting location for writing the SGL + * @end: points right after the end of the SGL + * @addr: the list of bus addresses for the SGL elements + * @start: start offset in the SKB where partial data starts + * @len: length of data from @start to send out + * + * This API will handle sending out partial data of a skb if required. + * Unlike cxgb4_write_sgl, @start can be any offset into the skb data, + * and @len will decide how much data after @start offset to send out. 
+ */ +void cxgb4_write_partial_sgl(const struct sk_buff *skb, struct sge_txq *q, + struct ulptx_sgl *sgl, u64 *end, + const dma_addr_t *addr, u32 start, u32 len) +{ + struct ulptx_sge_pair buf[MAX_SKB_FRAGS / 2 + 1] = {0}, *to; + u32 frag_size, skb_linear_data_len = skb_headlen(skb); + struct skb_shared_info *si = skb_shinfo(skb); + u8 i = 0, frag_idx = 0, nfrags = 0; + skb_frag_t *frag; + + /* Fill the first SGL either from linear data or from partial + * frag based on @start. + */ + if (unlikely(start < skb_linear_data_len)) { + frag_size = min(len, skb_linear_data_len - start); + sgl->len0 = htonl(frag_size); + sgl->addr0 = cpu_to_be64(addr[0] + start); + len -= frag_size; + nfrags++; + } else { + start -= skb_linear_data_len; + frag = &si->frags[frag_idx]; + frag_size = skb_frag_size(frag); + /* find the first frag */ + while (start >= frag_size) { + start -= frag_size; + frag_idx++; + frag = &si->frags[frag_idx]; + frag_size = skb_frag_size(frag); + } + + frag_size = min(len, skb_frag_size(frag) - start); + sgl->len0 = cpu_to_be32(frag_size); + sgl->addr0 = cpu_to_be64(addr[frag_idx + 1] + start); + len -= frag_size; + nfrags++; + frag_idx++; + } + + /* If the entire partial data fit in one SGL, then send it out + * now. + */ + if (!len) + goto done; + + /* Most of the complexity below deals with the possibility we hit the + * end of the queue in the middle of writing the SGL. For this case + * only we create the SGL in a temporary buffer and then copy it. + */ + to = (u8 *)end > (u8 *)q->stat ? buf : sgl->sge; + + /* If the skb couldn't fit in first SGL completely, fill the + * rest of the frags in subsequent SGLs. Note that each SGL + * pair can store 2 frags. + */ + while (len) { + frag_size = min(len, skb_frag_size(&si->frags[frag_idx])); + to->len[i & 1] = cpu_to_be32(frag_size); + to->addr[i & 1] = cpu_to_be64(addr[frag_idx + 1]); + if (i && (i & 1)) + to++; + nfrags++; + frag_idx++; + i++; + len -= frag_size; + } + + /* If we ended in an odd boundary, then set the second SGL's + * length in the pair to 0. + */ + if (i & 1) + to->len[1] = cpu_to_be32(0); + + /* Copy from temporary buffer to Tx ring, in case we hit the + * end of the queue in the middle of writing the SGL. + */ + if (unlikely((u8 *)end > (u8 *)q->stat)) { + u32 part0 = (u8 *)q->stat - (u8 *)sgl->sge, part1; + + if (likely(part0)) + memcpy(sgl->sge, buf, part0); + part1 = (u8 *)end - (u8 *)q->stat; + memcpy(q->desc, (u8 *)buf + part0, part1); + end = (void *)q->desc + part1; + } + + /* 0-pad to multiple of 16 */ + if ((uintptr_t)end & 8) + *end = 0; +done: + sgl->cmd_nsge = htonl(ULPTX_CMD_V(ULP_TX_SC_DSGL) | + ULPTX_NSGE_V(nfrags)); +} +EXPORT_SYMBOL(cxgb4_write_partial_sgl); + /* This function copies 64 byte coalesced work request to * memory mapped BAR2 space. For coalesced WR SGE fetches * data from the FIFO instead of from Host. diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index b7a3e757ee72..950841988ffe 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -14,6 +14,50 @@ static LIST_HEAD(uld_ctx_list); static DEFINE_MUTEX(dev_mutex); +/* chcr_get_nfrags_to_send: get the remaining nfrags after start offset + * @skb: skb + * @start: start offset. 
+ * @len: how much data to send after @start + */ +static int chcr_get_nfrags_to_send(struct sk_buff *skb, u32 start, u32 len) +{ + struct skb_shared_info *si = skb_shinfo(skb); + u32 frag_size, skb_linear_data_len = skb_headlen(skb); + u8 nfrags = 0, frag_idx = 0; + skb_frag_t *frag; + + /* if its a linear skb then return 1 */ + if (!skb_is_nonlinear(skb)) + return 1; + + if (unlikely(start < skb_linear_data_len)) { + frag_size = min(len, skb_linear_data_len - start); + start = 0; + } else { + start -= skb_linear_data_len; + + frag = &si->frags[frag_idx]; + frag_size = skb_frag_size(frag); + while (start >= frag_size) { + start -= frag_size; + frag_idx++; + frag = &si->frags[frag_idx]; + frag_size = skb_frag_size(frag); + } + frag_size = min(len, skb_frag_size(frag) - start); + } + len -= frag_size; + nfrags++; + + while (len) { + frag_size = min(len, skb_frag_size(&si->frags[frag_idx])); + len -= frag_size; + nfrags++; + frag_idx++; + } + return nfrags; +} + static int chcr_init_tcb_fields(struct chcr_ktls_info *tx_info); /* * chcr_ktls_save_keys: calculate and save crypto keys. @@ -865,35 +909,15 @@ static int chcr_ktls_xmit_tcb_cpls(struct chcr_ktls_info *tx_info, return 0; } -/* - * chcr_ktls_skb_copy - * @nskb - new skb where the frags to be added. - * @skb - old skb from which frags will be copied. - */ -static void chcr_ktls_skb_copy(struct sk_buff *skb, struct sk_buff *nskb) -{ - int i; - - for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { - skb_shinfo(nskb)->frags[i] = skb_shinfo(skb)->frags[i]; - __skb_frag_ref(&skb_shinfo(nskb)->frags[i]); - } - - skb_shinfo(nskb)->nr_frags = skb_shinfo(skb)->nr_frags; - nskb->len += skb->data_len; - nskb->data_len = skb->data_len; - nskb->truesize += skb->data_len; -} - /* * chcr_ktls_get_tx_flits * returns number of flits to be sent out, it includes key context length, WR * size and skb fragments. */ static unsigned int -chcr_ktls_get_tx_flits(const struct sk_buff *skb, unsigned int key_ctx_len) +chcr_ktls_get_tx_flits(u32 nr_frags, unsigned int key_ctx_len) { - return chcr_sgl_len(skb_shinfo(skb)->nr_frags) + + return chcr_sgl_len(nr_frags) + DIV_ROUND_UP(key_ctx_len + CHCR_KTLS_WR_SIZE, 8); } @@ -1038,71 +1062,6 @@ chcr_ktls_write_tcp_options(struct chcr_ktls_info *tx_info, struct sk_buff *skb, return 0; } -/* chcr_ktls_skb_shift - Shifts request length paged data from skb to another. 
- * @tgt- buffer into which tail data gets added - * @skb- buffer from which the paged data comes from - * @shiftlen- shift up to this many bytes - */ -static int chcr_ktls_skb_shift(struct sk_buff *tgt, struct sk_buff *skb, - int shiftlen) -{ - skb_frag_t *fragfrom, *fragto; - int from, to, todo; - - WARN_ON(shiftlen > skb->data_len); - - todo = shiftlen; - from = 0; - to = 0; - fragfrom = &skb_shinfo(skb)->frags[from]; - - while ((todo > 0) && (from < skb_shinfo(skb)->nr_frags)) { - fragfrom = &skb_shinfo(skb)->frags[from]; - fragto = &skb_shinfo(tgt)->frags[to]; - - if (todo >= skb_frag_size(fragfrom)) { - *fragto = *fragfrom; - todo -= skb_frag_size(fragfrom); - from++; - to++; - - } else { - __skb_frag_ref(fragfrom); - skb_frag_page_copy(fragto, fragfrom); - skb_frag_off_copy(fragto, fragfrom); - skb_frag_size_set(fragto, todo); - - skb_frag_off_add(fragfrom, todo); - skb_frag_size_sub(fragfrom, todo); - todo = 0; - - to++; - break; - } - } - - /* Ready to "commit" this state change to tgt */ - skb_shinfo(tgt)->nr_frags = to; - - /* Reposition in the original skb */ - to = 0; - while (from < skb_shinfo(skb)->nr_frags) - skb_shinfo(skb)->frags[to++] = skb_shinfo(skb)->frags[from++]; - - skb_shinfo(skb)->nr_frags = to; - - WARN_ON(todo > 0 && !skb_shinfo(skb)->nr_frags); - - skb->len -= shiftlen; - skb->data_len -= shiftlen; - skb->truesize -= shiftlen; - tgt->len += shiftlen; - tgt->data_len += shiftlen; - tgt->truesize += shiftlen; - - return shiftlen; -} - /* * chcr_ktls_xmit_wr_complete: This sends out the complete record. If an skb * received has partial end part of the record, send out the complete record, so @@ -1118,6 +1077,8 @@ static int chcr_ktls_skb_shift(struct sk_buff *tgt, struct sk_buff *skb, static int chcr_ktls_xmit_wr_complete(struct sk_buff *skb, struct chcr_ktls_info *tx_info, struct sge_eth_txq *q, u32 tcp_seq, + bool is_last_wr, u32 data_len, + u32 skb_offset, u32 nfrags, bool tcp_push, u32 mss) { u32 len16, wr_mid = 0, flits = 0, ndesc, cipher_start; @@ -1133,7 +1094,7 @@ static int chcr_ktls_xmit_wr_complete(struct sk_buff *skb, u64 *end; /* get the number of flits required */ - flits = chcr_ktls_get_tx_flits(skb, tx_info->key_ctx_len); + flits = chcr_ktls_get_tx_flits(nfrags, tx_info->key_ctx_len); /* number of descriptors */ ndesc = chcr_flits_to_desc(flits); /* check if enough credits available */ @@ -1162,6 +1123,9 @@ static int chcr_ktls_xmit_wr_complete(struct sk_buff *skb, return NETDEV_TX_BUSY; } + if (!is_last_wr) + skb_get(skb); + pos = &q->q.desc[q->q.pidx]; end = (u64 *)pos + flits; /* FW_ULPTX_WR */ @@ -1194,7 +1158,7 @@ static int chcr_ktls_xmit_wr_complete(struct sk_buff *skb, CPL_TX_SEC_PDU_CPLLEN_V(CHCR_CPL_TX_SEC_PDU_LEN_64BIT) | CPL_TX_SEC_PDU_PLACEHOLDER_V(1) | CPL_TX_SEC_PDU_IVINSRTOFST_V(TLS_HEADER_SIZE + 1)); - cpl->pldlen = htonl(skb->data_len); + cpl->pldlen = htonl(data_len); /* encryption should start after tls header size + iv size */ cipher_start = TLS_HEADER_SIZE + tx_info->iv_size + 1; @@ -1236,7 +1200,7 @@ static int chcr_ktls_xmit_wr_complete(struct sk_buff *skb, /* CPL_TX_DATA */ tx_data = (void *)pos; OPCODE_TID(tx_data) = htonl(MK_OPCODE_TID(CPL_TX_DATA, tx_info->tid)); - tx_data->len = htonl(TX_DATA_MSS_V(mss) | TX_LENGTH_V(skb->data_len)); + tx_data->len = htonl(TX_DATA_MSS_V(mss) | TX_LENGTH_V(data_len)); tx_data->rsvd = htonl(tcp_seq); @@ -1256,8 +1220,8 @@ static int chcr_ktls_xmit_wr_complete(struct sk_buff *skb, } /* send the complete packet except the header */ - cxgb4_write_sgl(skb, &q->q, pos, end, skb->len - 
skb->data_len, - sgl_sdesc->addr); + cxgb4_write_partial_sgl(skb, &q->q, pos, end, sgl_sdesc->addr, + skb_offset, data_len); sgl_sdesc->skb = skb; chcr_txq_advance(&q->q, ndesc); @@ -1289,10 +1253,11 @@ static int chcr_ktls_xmit_wr_short(struct sk_buff *skb, struct sge_eth_txq *q, u32 tcp_seq, bool tcp_push, u32 mss, u32 tls_rec_offset, u8 *prior_data, - u32 prior_data_len) + u32 prior_data_len, u32 data_len, + u32 skb_offset) { + u32 len16, wr_mid = 0, cipher_start, nfrags; struct adapter *adap = tx_info->adap; - u32 len16, wr_mid = 0, cipher_start; unsigned int flits = 0, ndesc; int credits, left, last_desc; struct tx_sw_desc *sgl_sdesc; @@ -1305,10 +1270,11 @@ static int chcr_ktls_xmit_wr_short(struct sk_buff *skb, void *pos; u64 *end; + nfrags = chcr_get_nfrags_to_send(skb, skb_offset, data_len); /* get the number of flits required, it's a partial record so 2 flits * (AES_BLOCK_SIZE) will be added. */ - flits = chcr_ktls_get_tx_flits(skb, tx_info->key_ctx_len) + 2; + flits = chcr_ktls_get_tx_flits(nfrags, tx_info->key_ctx_len) + 2; /* get the correct 8 byte IV of this record */ iv_record = cpu_to_be64(tx_info->iv + tx_info->record_no); /* If it's a middle record and not 16 byte aligned to run AES CTR, need @@ -1380,7 +1346,7 @@ static int chcr_ktls_xmit_wr_short(struct sk_buff *skb, htonl(CPL_TX_SEC_PDU_OPCODE_V(CPL_TX_SEC_PDU) | CPL_TX_SEC_PDU_CPLLEN_V(CHCR_CPL_TX_SEC_PDU_LEN_64BIT) | CPL_TX_SEC_PDU_IVINSRTOFST_V(1)); - cpl->pldlen = htonl(skb->data_len + AES_BLOCK_LEN + prior_data_len); + cpl->pldlen = htonl(data_len + AES_BLOCK_LEN + prior_data_len); cpl->aadstart_cipherstop_hi = htonl(CPL_TX_SEC_PDU_CIPHERSTART_V(cipher_start)); cpl->cipherstop_lo_authinsert = 0; @@ -1411,7 +1377,7 @@ static int chcr_ktls_xmit_wr_short(struct sk_buff *skb, tx_data = (void *)pos; OPCODE_TID(tx_data) = htonl(MK_OPCODE_TID(CPL_TX_DATA, tx_info->tid)); tx_data->len = htonl(TX_DATA_MSS_V(mss) | - TX_LENGTH_V(skb->data_len + prior_data_len)); + TX_LENGTH_V(data_len + prior_data_len)); tx_data->rsvd = htonl(tcp_seq); tx_data->flags = htonl(TX_BYPASS_F); if (tcp_push) @@ -1444,8 +1410,8 @@ static int chcr_ktls_xmit_wr_short(struct sk_buff *skb, if (prior_data_len) pos = chcr_copy_to_txd(prior_data, &q->q, pos, 16); /* send the complete packet except the header */ - cxgb4_write_sgl(skb, &q->q, pos, end, skb->len - skb->data_len, - sgl_sdesc->addr); + cxgb4_write_partial_sgl(skb, &q->q, pos, end, sgl_sdesc->addr, + skb_offset, data_len); sgl_sdesc->skb = skb; chcr_txq_advance(&q->q, ndesc); @@ -1473,6 +1439,7 @@ static int chcr_ktls_tx_plaintxt(struct chcr_ktls_info *tx_info, struct sk_buff *skb, u32 tcp_seq, u32 mss, bool tcp_push, struct sge_eth_txq *q, u32 port_id, u8 *prior_data, + u32 data_len, u32 skb_offset, u32 prior_data_len) { int credits, left, len16, last_desc; @@ -1482,14 +1449,16 @@ static int chcr_ktls_tx_plaintxt(struct chcr_ktls_info *tx_info, struct ulptx_idata *idata; struct ulp_txpkt *ulptx; struct fw_ulptx_wr *wr; - u32 wr_mid = 0; + u32 wr_mid = 0, nfrags; void *pos; u64 *end; flits = DIV_ROUND_UP(CHCR_PLAIN_TX_DATA_LEN, 8); - flits += chcr_sgl_len(skb_shinfo(skb)->nr_frags); + nfrags = chcr_get_nfrags_to_send(skb, skb_offset, data_len); + flits += chcr_sgl_len(nfrags); if (prior_data_len) flits += 2; + /* WR will need len16 */ len16 = DIV_ROUND_UP(flits, 2); /* check how many descriptors needed */ @@ -1542,7 +1511,7 @@ static int chcr_ktls_tx_plaintxt(struct chcr_ktls_info *tx_info, tx_data = (struct cpl_tx_data *)(idata + 1); OPCODE_TID(tx_data) = htonl(MK_OPCODE_TID(CPL_TX_DATA, 
tx_info->tid)); tx_data->len = htonl(TX_DATA_MSS_V(mss) | - TX_LENGTH_V(skb->data_len + prior_data_len)); + TX_LENGTH_V(data_len + prior_data_len)); /* set tcp seq number */ tx_data->rsvd = htonl(tcp_seq); tx_data->flags = htonl(TX_BYPASS_F); @@ -1566,8 +1535,8 @@ static int chcr_ktls_tx_plaintxt(struct chcr_ktls_info *tx_info, end = pos + left; } /* send the complete packet including the header */ - cxgb4_write_sgl(skb, &q->q, pos, end, skb->len - skb->data_len, - sgl_sdesc->addr); + cxgb4_write_partial_sgl(skb, &q->q, pos, end, sgl_sdesc->addr, + skb_offset, data_len); sgl_sdesc->skb = skb; chcr_txq_advance(&q->q, ndesc); @@ -1578,9 +1547,11 @@ static int chcr_ktls_tx_plaintxt(struct chcr_ktls_info *tx_info, /* * chcr_ktls_copy_record_in_skb * @nskb - new skb where the frags to be added. + * @skb - old skb, to copy socket and destructor details. * @record - specific record which has complete 16k record in frags. */ static void chcr_ktls_copy_record_in_skb(struct sk_buff *nskb, + struct sk_buff *skb, struct tls_record_info *record) { int i = 0; @@ -1595,6 +1566,9 @@ static void chcr_ktls_copy_record_in_skb(struct sk_buff *nskb, nskb->data_len = record->len; nskb->len += record->len; nskb->truesize += record->len; + nskb->sk = skb->sk; + nskb->destructor = skb->destructor; + refcount_add(nskb->truesize, &nskb->sk->sk_wmem_alloc); } /* @@ -1666,7 +1640,7 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info, struct sk_buff *skb, struct tls_record_info *record, u32 tcp_seq, int mss, bool tcp_push_no_fin, - struct sge_eth_txq *q, + struct sge_eth_txq *q, u32 skb_offset, u32 tls_end_offset, bool last_wr) { struct sk_buff *nskb = NULL; @@ -1675,13 +1649,14 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info, nskb = skb; atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_complete_pkts); } else { - dev_kfree_skb_any(skb); - - nskb = alloc_skb(0, GFP_KERNEL); - if (!nskb) + nskb = alloc_skb(0, GFP_ATOMIC); + if (!nskb) { + dev_kfree_skb_any(skb); return NETDEV_TX_BUSY; + } + /* copy complete record in skb */ - chcr_ktls_copy_record_in_skb(nskb, record); + chcr_ktls_copy_record_in_skb(nskb, skb, record); /* packet is being sent from the beginning, update the tcp_seq * accordingly. */ @@ -1691,10 +1666,20 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info, */ if (chcr_ktls_update_snd_una(tx_info, q)) goto out; + /* reset skb offset */ + skb_offset = 0; + + if (last_wr) + dev_kfree_skb_any(skb); + + last_wr = true; + atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_end_pkts); } if (chcr_ktls_xmit_wr_complete(nskb, tx_info, q, tcp_seq, + last_wr, record->len, skb_offset, + record->num_frags, (last_wr && tcp_push_no_fin), mss)) { goto out; @@ -1730,41 +1715,32 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, struct sk_buff *skb, struct tls_record_info *record, u32 tcp_seq, int mss, bool tcp_push_no_fin, + u32 data_len, u32 skb_offset, struct sge_eth_txq *q, u32 tls_end_offset) { u32 tls_rec_offset = tcp_seq - tls_record_start_seq(record); u8 prior_data[16] = {0}; u32 prior_data_len = 0; - u32 data_len; /* check if the skb is ending in middle of tag/HASH, its a big * trouble, send the packet before the HASH. 
*/ - int remaining_record = tls_end_offset - skb->data_len; + int remaining_record = tls_end_offset - data_len; if (remaining_record > 0 && remaining_record < TLS_CIPHER_AES_GCM_128_TAG_SIZE) { - int trimmed_len = skb->data_len - + int trimmed_len = data_len - (TLS_CIPHER_AES_GCM_128_TAG_SIZE - remaining_record); - struct sk_buff *tmp_skb = NULL; /* don't process the pkt if it is only a partial tag */ - if (skb->data_len < TLS_CIPHER_AES_GCM_128_TAG_SIZE) + if (data_len < TLS_CIPHER_AES_GCM_128_TAG_SIZE) goto out; - WARN_ON(trimmed_len > skb->data_len); - - /* shift to those many bytes */ - tmp_skb = alloc_skb(0, GFP_KERNEL); - if (unlikely(!tmp_skb)) - goto out; + WARN_ON(trimmed_len > data_len); - chcr_ktls_skb_shift(tmp_skb, skb, trimmed_len); - /* free the last trimmed portion */ - dev_kfree_skb_any(skb); - skb = tmp_skb; + data_len = trimmed_len; atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_trimmed_pkts); } - data_len = skb->data_len; + /* check if the middle record's start point is 16 byte aligned. CTR * needs 16 byte aligned start point to start encryption. */ @@ -1825,9 +1801,6 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, } /* reset tcp_seq as per the prior_data_required len */ tcp_seq -= prior_data_len; - /* include prio_data_len for further calculation. - */ - data_len += prior_data_len; } /* reset snd una, so the middle record won't send the already * sent part. @@ -1844,6 +1817,7 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, tcp_push_no_fin, q, tx_info->port_id, prior_data, + data_len, skb_offset, prior_data_len)) { goto out; } @@ -1854,7 +1828,7 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, if (chcr_ktls_xmit_wr_short(skb, tx_info, q, tcp_seq, tcp_push_no_fin, mss, tls_rec_offset, prior_data, - prior_data_len)) { + prior_data_len, data_len, skb_offset)) { goto out; } @@ -1876,7 +1850,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) struct tls_record_info *record; struct chcr_ktls_info *tx_info; struct tls_context *tls_ctx; - struct sk_buff *local_skb; struct sge_eth_txq *q; struct adapter *adap; unsigned long flags; @@ -1898,14 +1871,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) if (unlikely(!tx_info)) goto out; - /* don't touch the original skb, make a new skb to extract each records - * and send them separately. - */ - local_skb = alloc_skb(0, GFP_KERNEL); - - if (unlikely(!local_skb)) - return NETDEV_TX_BUSY; - adap = tx_info->adap; stats = &adap->ch_ktls_stats; port_stats = &stats->ktls_port[tx_info->port_id]; @@ -1925,13 +1890,9 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) ntohl(th->ack_seq), ntohs(th->window)); if (ret) { - dev_kfree_skb_any(local_skb); return NETDEV_TX_BUSY; } - /* copy skb contents into local skb */ - chcr_ktls_skb_copy(skb, local_skb); - /* TCP segments can be in received either complete or partial. * chcr_end_part_handler will handle cases if complete record or end * part of the record is received. Incase of partial end part of record, @@ -1961,7 +1922,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) atomic64_inc(&port_stats->ktls_tx_skip_no_sync_data); goto out; } - /* increase page reference count of the record, so that there * won't be any chance of page free in middle if in case stack * receives ACK and try to delete the record. 
@@ -1977,44 +1937,28 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) tcp_seq, record->end_seq, tx_info->prev_seq, data_len); /* if a tls record is finishing in this SKB */ if (tls_end_offset <= data_len) { - struct sk_buff *nskb = NULL; - - if (tls_end_offset < data_len) { - nskb = alloc_skb(0, GFP_KERNEL); - if (unlikely(!nskb)) { - ret = -ENOMEM; - goto clear_ref; - } - - chcr_ktls_skb_shift(nskb, local_skb, - tls_end_offset); - } else { - /* its the only record in this skb, directly - * point it. - */ - nskb = local_skb; - } - ret = chcr_end_part_handler(tx_info, nskb, record, + ret = chcr_end_part_handler(tx_info, skb, record, tcp_seq, mss, (!th->fin && th->psh), q, + skb_offset, tls_end_offset, - (nskb == local_skb)); - - if (ret && nskb != local_skb) - dev_kfree_skb_any(local_skb); + skb_offset + + tls_end_offset == skb->len); data_len -= tls_end_offset; /* tcp_seq increment is required to handle next record. */ tcp_seq += tls_end_offset; + skb_offset += tls_end_offset; } else { - ret = chcr_short_record_handler(tx_info, local_skb, + ret = chcr_short_record_handler(tx_info, skb, record, tcp_seq, mss, (!th->fin && th->psh), + data_len, skb_offset, q, tls_end_offset); data_len = 0; } -clear_ref: + /* clear the frag ref count which increased locally before */ for (i = 0; i < record->num_frags; i++) { /* clear the frag ref count */ @@ -2022,7 +1966,8 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) } /* if any failure, come out from the loop. */ if (ret) - goto out; + return NETDEV_TX_OK; + /* length should never be less than 0 */ WARN_ON(data_len < 0); @@ -2038,6 +1983,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) if (th->fin) chcr_ktls_write_tcp_options(tx_info, skb, q, tx_info->tx_chan); + return NETDEV_TX_OK; out: dev_kfree_skb_any(skb); return NETDEV_TX_OK; From patchwork Mon Nov 9 10:51:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890965 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4789C388F7 for ; Mon, 9 Nov 2020 10:52:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 56E21206E5 for ; Mon, 9 Nov 2020 10:52:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729413AbgKIKwt (ORCPT ); Mon, 9 Nov 2020 05:52:49 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:52196 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729402AbgKIKwr (ORCPT ); Mon, 9 Nov 2020 05:52:47 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMe011444; Mon, 9 Nov 2020 02:52:42 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 05/12] ch_ktls: Correction in trimmed_len calculation Date: Mon, 9 Nov 2020 
16:21:35 +0530 Message-Id: <20201109105142.15398-6-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org trimmed length calculation goes wrong if skb has only tag part to send. It should be zero if there is no data bytes apart from TAG. Fixes: dc05f3df8fac ("chcr: Handle first or middle part of record") Signed-off-by: Rohit Maheshwari --- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index 950841988ffe..4286decce095 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -1729,10 +1729,13 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, if (remaining_record > 0 && remaining_record < TLS_CIPHER_AES_GCM_128_TAG_SIZE) { - int trimmed_len = data_len - - (TLS_CIPHER_AES_GCM_128_TAG_SIZE - remaining_record); - /* don't process the pkt if it is only a partial tag */ - if (data_len < TLS_CIPHER_AES_GCM_128_TAG_SIZE) + int trimmed_len = 0; + + if (tls_end_offset > TLS_CIPHER_AES_GCM_128_TAG_SIZE) + trimmed_len = data_len - + (TLS_CIPHER_AES_GCM_128_TAG_SIZE - + remaining_record); + if (!trimmed_len) goto out; WARN_ON(trimmed_len > data_len); From patchwork Mon Nov 9 10:51:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890951 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0D38C388F7 for ; Mon, 9 Nov 2020 10:52:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 54617206E5 for ; Mon, 9 Nov 2020 10:52:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729418AbgKIKwv (ORCPT ); Mon, 9 Nov 2020 05:52:51 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:50268 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729402AbgKIKwu (ORCPT ); Mon, 9 Nov 2020 05:52:50 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMf011444; Mon, 9 Nov 2020 02:52:46 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 06/12] ch_ktls: missing handling of header alone Date: Mon, 9 Nov 2020 16:21:36 +0530 Message-Id: <20201109105142.15398-7-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org 
X-Patchwork-Delegate: kuba@kernel.org If an skb has only header part which doesn't start from beginning, is not being handled properly. Fixes: dc05f3df8fac ("chcr: Handle first or middle part of record") Signed-off-by: Rohit Maheshwari --- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 25 ++++++++----------- 1 file changed, 11 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index 4286decce095..8a54fce9bfae 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -1744,6 +1744,17 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_trimmed_pkts); } + /* check if it is only the header part. */ + if (tls_rec_offset + data_len <= (TLS_HEADER_SIZE + tx_info->iv_size)) { + if (chcr_ktls_tx_plaintxt(tx_info, skb, tcp_seq, mss, + tcp_push_no_fin, q, + tx_info->port_id, prior_data, + data_len, skb_offset, prior_data_len)) + goto out; + + return 0; + } + /* check if the middle record's start point is 16 byte aligned. CTR * needs 16 byte aligned start point to start encryption. */ @@ -1812,20 +1823,6 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, goto out; atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_middle_pkts); } else { - /* Else means, its a partial first part of the record. Check if - * its only the header, don't need to send for encryption then. - */ - if (data_len <= TLS_HEADER_SIZE + tx_info->iv_size) { - if (chcr_ktls_tx_plaintxt(tx_info, skb, tcp_seq, mss, - tcp_push_no_fin, q, - tx_info->port_id, - prior_data, - data_len, skb_offset, - prior_data_len)) { - goto out; - } - return 0; - } atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_start_pkts); } From patchwork Mon Nov 9 10:51:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890955 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AD0E6C388F7 for ; Mon, 9 Nov 2020 10:52:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 67C84206E5 for ; Mon, 9 Nov 2020 10:52:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729425AbgKIKwz (ORCPT ); Mon, 9 Nov 2020 05:52:55 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:22698 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729195AbgKIKwz (ORCPT ); Mon, 9 Nov 2020 05:52:55 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMg011444; Mon, 9 Nov 2020 02:52:49 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 07/12] ch_ktls: Correction in middle record handling Date: 
Mon, 9 Nov 2020 16:21:37 +0530 Message-Id: <20201109105142.15398-8-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org If a record starts in middle, reset TCB UNA so that we could avoid sending out extra packet which is needed to make it 16 byte aligned to start AES CTR. Check also considers prev_seq, which should be what is actually sent, not the skb data length. Avoid updating partial TAG to HW at any point of time, that's why we need to check if remaining part is smaller than TAG size, then reset TX_MAX to be TAG starting sequence number. Fixes: 5a4b9fe7fece ("cxgb4/chcr: complete record tx handling") Signed-off-by: Rohit Maheshwari --- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 50 ++++++++++++------- 1 file changed, 31 insertions(+), 19 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index 8a54fce9bfae..026c66599d1e 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -827,7 +827,7 @@ static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info, */ static int chcr_ktls_xmit_tcb_cpls(struct chcr_ktls_info *tx_info, struct sge_eth_txq *q, u64 tcp_seq, - u64 tcp_ack, u64 tcp_win) + u64 tcp_ack, u64 tcp_win, bool offset) { bool first_wr = ((tx_info->prev_ack == 0) && (tx_info->prev_win == 0)); struct ch_ktls_port_stats_debug *port_stats; @@ -862,7 +862,7 @@ static int chcr_ktls_xmit_tcb_cpls(struct chcr_ktls_info *tx_info, cpl++; } /* reset snd una if it's a re-transmit pkt */ - if (tcp_seq != tx_info->prev_seq) { + if (tcp_seq != tx_info->prev_seq || offset) { /* reset snd_una */ port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id]; @@ -871,7 +871,8 @@ static int chcr_ktls_xmit_tcb_cpls(struct chcr_ktls_info *tx_info, TCB_SND_UNA_RAW_V (TCB_SND_UNA_RAW_M), TCB_SND_UNA_RAW_V(0), 0); - atomic64_inc(&port_stats->ktls_tx_ooo); + if (tcp_seq != tx_info->prev_seq) + atomic64_inc(&port_stats->ktls_tx_ooo); cpl++; } /* update ack */ @@ -1661,11 +1662,6 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info, * accordingly. */ tcp_seq = tls_record_start_seq(record); - /* reset snd una, so the middle record won't send the already - * sent part. 
- */ - if (chcr_ktls_update_snd_una(tx_info, q)) - goto out; /* reset skb offset */ skb_offset = 0; @@ -1684,6 +1680,7 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info, mss)) { goto out; } + tx_info->prev_seq = record->end_seq; return 0; out: dev_kfree_skb_any(nskb); @@ -1752,6 +1749,7 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, data_len, skb_offset, prior_data_len)) goto out; + tx_info->prev_seq = tcp_seq + data_len; return 0; } @@ -1832,6 +1830,7 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, goto out; } + tx_info->prev_seq = tcp_seq + data_len + prior_data_len; return 0; out: dev_kfree_skb_any(skb); @@ -1885,13 +1884,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) if (ret) return NETDEV_TX_BUSY; } - /* update tcb */ - ret = chcr_ktls_xmit_tcb_cpls(tx_info, q, ntohl(th->seq), - ntohl(th->ack_seq), - ntohs(th->window)); - if (ret) { - return NETDEV_TX_BUSY; - } /* TCP segments can be in received either complete or partial. * chcr_end_part_handler will handle cases if complete record or end @@ -1922,6 +1914,30 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) atomic64_inc(&port_stats->ktls_tx_skip_no_sync_data); goto out; } + tls_end_offset = record->end_seq - tcp_seq; + + pr_debug("seq 0x%x, end_seq 0x%x prev_seq 0x%x, datalen 0x%x\n", + tcp_seq, record->end_seq, tx_info->prev_seq, data_len); + /* update tcb for the skb */ + if (skb_data_len == data_len) { + u32 tx_max = tcp_seq; + + if (!tls_record_is_start_marker(record) && + tls_end_offset < TLS_CIPHER_AES_GCM_128_TAG_SIZE) + tx_max = record->end_seq - + TLS_CIPHER_AES_GCM_128_TAG_SIZE; + + ret = chcr_ktls_xmit_tcb_cpls(tx_info, q, tx_max, + ntohl(th->ack_seq), + ntohs(th->window), + tls_end_offset != + record->len); + if (ret) { + spin_unlock_irqrestore(&tx_ctx->base.lock, + flags); + goto out; + } + } /* increase page reference count of the record, so that there * won't be any chance of page free in middle if in case stack * receives ACK and try to delete the record. 
@@ -1931,10 +1947,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) /* lock cleared */ spin_unlock_irqrestore(&tx_ctx->base.lock, flags); - tls_end_offset = record->end_seq - tcp_seq; - pr_debug("seq 0x%x, end_seq 0x%x prev_seq 0x%x, datalen 0x%x\n", - tcp_seq, record->end_seq, tx_info->prev_seq, data_len); /* if a tls record is finishing in this SKB */ if (tls_end_offset <= data_len) { ret = chcr_end_part_handler(tx_info, skb, record, @@ -1973,7 +1986,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) } while (data_len > 0); - tx_info->prev_seq = ntohl(th->seq) + skb_data_len; atomic64_inc(&port_stats->ktls_tx_encrypted_packets); atomic64_add(skb_data_len, &port_stats->ktls_tx_encrypted_bytes); From patchwork Mon Nov 9 10:51:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890973 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A98DCC388F7 for ; Mon, 9 Nov 2020 10:53:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5C1EB206E3 for ; Mon, 9 Nov 2020 10:53:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729432AbgKIKw6 (ORCPT ); Mon, 9 Nov 2020 05:52:58 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:30019 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728462AbgKIKw6 (ORCPT ); Mon, 9 Nov 2020 05:52:58 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMh011444; Mon, 9 Nov 2020 02:52:53 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 08/12] ch_ktls: packet handling prior to start marker Date: Mon, 9 Nov 2020 16:21:38 +0530 Message-Id: <20201109105142.15398-9-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org There could be a case where ACK for tls exchanges prior to start marker is missed out, and by the time tls is offloaded. This pkt should not be discarded and handled carefully. It could be plaintext alone or plaintext + finish as well. 
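In other words, bytes sitting before the start marker (for example a retransmitted handshake segment whose ACK was lost before the connection was offloaded) must go out as plaintext, and only bytes at or beyond the marker need encryption. A condensed sketch of the branch the diff below adds to chcr_ktls_xmit(), with error handling and locking omitted; all names follow that diff:

	if (unlikely(tls_record_is_start_marker(record))) {
		/* data beyond the marker still needs encryption, so keep a
		 * reference; otherwise the whole skb is plaintext.
		 */
		if (tls_end_offset < data_len)
			skb_get(skb);
		else
			tls_end_offset = data_len;

		ret = chcr_ktls_tx_plaintxt(tx_info, skb, tcp_seq, mss,
					    (!th->fin && th->psh), q,
					    tx_info->port_id, NULL,
					    tls_end_offset, skb_offset, 0);

		/* advance past the plaintext part and move on to the next
		 * (offloaded) record.
		 */
		data_len -= tls_end_offset;
		tcp_seq = record->end_seq;
		skb_offset += tls_end_offset;
	}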
Fixes: 5a4b9fe7fece ("cxgb4/chcr: complete record tx handling") Signed-off-by: Rohit Maheshwari --- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 38 ++++++++++++++++--- 1 file changed, 33 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index 026c66599d1e..bbda71b7f98b 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -1909,11 +1909,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) goto out; } - if (unlikely(tls_record_is_start_marker(record))) { - spin_unlock_irqrestore(&tx_ctx->base.lock, flags); - atomic64_inc(&port_stats->ktls_tx_skip_no_sync_data); - goto out; - } tls_end_offset = record->end_seq - tcp_seq; pr_debug("seq 0x%x, end_seq 0x%x prev_seq 0x%x, datalen 0x%x\n", @@ -1938,6 +1933,39 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) goto out; } } + + if (unlikely(tls_record_is_start_marker(record))) { + atomic64_inc(&port_stats->ktls_tx_skip_no_sync_data); + /* If tls_end_offset < data_len, means there is some + * data after start marker, which needs encryption, send + * plaintext first and take skb refcount. else send out + * complete pkt as plaintext. + */ + if (tls_end_offset < data_len) + skb_get(skb); + else + tls_end_offset = data_len; + + ret = chcr_ktls_tx_plaintxt(tx_info, skb, tcp_seq, mss, + (!th->fin && th->psh), q, + tx_info->port_id, NULL, + tls_end_offset, skb_offset, + 0); + + spin_unlock_irqrestore(&tx_ctx->base.lock, flags); + if (ret) { + /* free the refcount taken earlier */ + if (tls_end_offset < data_len) + dev_kfree_skb_any(skb); + goto out; + } + + data_len -= tls_end_offset; + tcp_seq = record->end_seq; + skb_offset += tls_end_offset; + continue; + } + /* increase page reference count of the record, so that there * won't be any chance of page free in middle if in case stack * receives ACK and try to delete the record. 
From patchwork Mon Nov 9 10:51:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890967 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4F524C2D0A3 for ; Mon, 9 Nov 2020 10:53:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0BE5C2080C for ; Mon, 9 Nov 2020 10:53:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729445AbgKIKxC (ORCPT ); Mon, 9 Nov 2020 05:53:02 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:21438 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728462AbgKIKxB (ORCPT ); Mon, 9 Nov 2020 05:53:01 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMi011444; Mon, 9 Nov 2020 02:52:56 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 09/12] ch_ktls: don't free skb before sending FIN Date: Mon, 9 Nov 2020 16:21:39 +0530 Message-Id: <20201109105142.15398-10-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org If it is the last packet and FIN is set, make sure the FIN is conveyed to the HW before the skb gets freed. Fixes: 429765a149f1 ("chcr: handle partial end part of a record") Signed-off-by: Rohit Maheshwari --- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index bbda71b7f98b..a8062e038ebc 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -1932,6 +1932,9 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) flags); goto out; } + + if (th->fin) + skb_get(skb); } if (unlikely(tls_record_is_start_marker(record))) { @@ -2006,8 +2009,11 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) __skb_frag_unref(&record->frags[i]); } /* if any failure, come out from the loop. */ - if (ret) + if (ret) { + if (th->fin) + dev_kfree_skb_any(skb); return NETDEV_TX_OK; + } /* length should never be less than 0 */ WARN_ON(data_len < 0); @@ -2020,8 +2026,10 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) /* tcp finish is set, send a separate tcp msg including all the options * as well.
*/ - if (th->fin) + if (th->fin) { chcr_ktls_write_tcp_options(tx_info, skb, q, tx_info->tx_chan); + dev_kfree_skb_any(skb); + } return NETDEV_TX_OK; out: From patchwork Mon Nov 9 10:51:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890969 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4151C388F7 for ; Mon, 9 Nov 2020 10:53:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 58CD4206E5 for ; Mon, 9 Nov 2020 10:53:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729452AbgKIKxG (ORCPT ); Mon, 9 Nov 2020 05:53:06 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:11502 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728462AbgKIKxG (ORCPT ); Mon, 9 Nov 2020 05:53:06 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMj011444; Mon, 9 Nov 2020 02:53:00 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 10/12] ch_ktls/cxgb4: handle partial tag alone SKBs Date: Mon, 9 Nov 2020 16:21:40 +0530 Message-Id: <20201109105142.15398-11-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org If TCP congestion causes a very small packet which carries only some part of the TAG, and that part does not extend to the end of the record, the HW can't handle such a case, so fall back to SW crypto in such cases. v1->v2: - Marked chcr_ktls_sw_fallback() static.
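For illustration, the "partial tag alone" condition can be modelled with a minimal standalone C check. The names and parameter meanings below are assumptions made for this sketch (data_len = TCP payload bytes in the skb, remaining_record = record bytes that lie beyond this skb); it is not the driver's code, only the idea behind the fallback.

/* Hypothetical sketch: an skb that carries nothing but an incomplete
 * slice of the 16-byte AES-GCM tag cannot be encrypted by the HW on
 * its own, so it has to take the software crypto path instead.
 */
#include <stdbool.h>
#include <stdint.h>

#define GCM_TAG_SIZE 16u /* TLS_CIPHER_AES_GCM_128_TAG_SIZE */

static bool is_partial_tag_only(uint32_t data_len, uint32_t remaining_record)
{
    uint32_t tag_bytes_in_skb, usable;

    /* If at least a full tag still lies beyond this skb, the skb holds
     * ordinary payload and the HW can handle it.
     */
    if (remaining_record >= GCM_TAG_SIZE)
        return false;

    /* Tag bytes that fall inside this skb (the record end is not
     * reached, since remaining_record bytes still follow).
     */
    tag_bytes_in_skb = GCM_TAG_SIZE - remaining_record;

    /* Payload left once the incomplete tag slice is trimmed away. */
    usable = data_len > tag_bytes_in_skb ? data_len - tag_bytes_in_skb : 0;

    return usable == 0; /* only a partial tag: fall back to SW crypto */
}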
Fixes: dc05f3df8fac ("chcr: Handle first or middle part of record") Signed-off-by: Rohit Maheshwari --- .../ethernet/chelsio/cxgb4/cxgb4_debugfs.c | 2 + .../net/ethernet/chelsio/cxgb4/cxgb4_uld.h | 1 + .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 116 +++++++++++++++++- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.h | 1 + 4 files changed, 119 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c index c24d34a937c8..7d49fd4edc9e 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c @@ -3573,6 +3573,8 @@ static int chcr_stats_show(struct seq_file *seq, void *v) atomic64_read(&adap->ch_ktls_stats.ktls_tx_complete_pkts)); seq_printf(seq, "TX trim pkts : %20llu\n", atomic64_read(&adap->ch_ktls_stats.ktls_tx_trimmed_pkts)); + seq_printf(seq, "TX sw fallback : %20llu\n", + atomic64_read(&adap->ch_ktls_stats.ktls_tx_fallback)); while (i < MAX_NPORTS) { ktls_port = &adap->ch_ktls_stats.ktls_port[i]; seq_printf(seq, "Port %d\n", i); diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h index e2a4941fa802..1b49f2fa9b18 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h @@ -388,6 +388,7 @@ struct ch_ktls_stats_debug { atomic64_t ktls_tx_retransmit_pkts; atomic64_t ktls_tx_complete_pkts; atomic64_t ktls_tx_trimmed_pkts; + atomic64_t ktls_tx_fallback; }; #endif diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index a8062e038ebc..b182c940b4a0 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -1545,6 +1545,88 @@ static int chcr_ktls_tx_plaintxt(struct chcr_ktls_info *tx_info, return 0; } +static int chcr_ktls_tunnel_pkt(struct chcr_ktls_info *tx_info, + struct sk_buff *skb, + struct sge_eth_txq *q) +{ + u32 ctrl, iplen, maclen, wr_mid = 0, len16; + struct tx_sw_desc *sgl_sdesc; + struct fw_eth_tx_pkt_wr *wr; + struct cpl_tx_pkt_core *cpl; + unsigned int flits, ndesc; + int credits, last_desc; + u64 cntrl1, *end; + void *pos; + + ctrl = sizeof(*cpl); + flits = DIV_ROUND_UP(sizeof(*wr) + ctrl, 8); + + flits += chcr_sgl_len(skb_shinfo(skb)->nr_frags + 1); + len16 = DIV_ROUND_UP(flits, 2); + /* check how many descriptors needed */ + ndesc = DIV_ROUND_UP(flits, 8); + + credits = chcr_txq_avail(&q->q) - ndesc; + if (unlikely(credits < 0)) { + chcr_eth_txq_stop(q); + return -ENOMEM; + } + + if (unlikely(credits < ETHTXQ_STOP_THRES)) { + chcr_eth_txq_stop(q); + wr_mid |= FW_WR_EQUEQ_F | FW_WR_EQUIQ_F; + } + + last_desc = q->q.pidx + ndesc - 1; + if (last_desc >= q->q.size) + last_desc -= q->q.size; + sgl_sdesc = &q->q.sdesc[last_desc]; + + if (unlikely(cxgb4_map_skb(tx_info->adap->pdev_dev, skb, + sgl_sdesc->addr) < 0)) { + memset(sgl_sdesc->addr, 0, sizeof(sgl_sdesc->addr)); + q->mapping_err++; + return -ENOMEM; + } + + iplen = skb_network_header_len(skb); + maclen = skb_mac_header_len(skb); + + pos = &q->q.desc[q->q.pidx]; + end = (u64 *)pos + flits; + wr = pos; + + /* Firmware work request header */ + wr->op_immdlen = htonl(FW_WR_OP_V(FW_ETH_TX_PKT_WR) | + FW_WR_IMMDLEN_V(ctrl)); + + wr->equiq_to_len16 = htonl(wr_mid | FW_WR_LEN16_V(len16)); + wr->r3 = 0; + + cpl = (void *)(wr + 1); + + /* CPL header */ + cpl->ctrl0 = htonl(TXPKT_OPCODE_V(CPL_TX_PKT) | 
+ TXPKT_INTF_V(tx_info->tx_chan) | + TXPKT_PF_V(tx_info->adap->pf)); + cpl->pack = 0; + cntrl1 = TXPKT_CSUM_TYPE_V(tx_info->ip_family == AF_INET ? + TX_CSUM_TCPIP : TX_CSUM_TCPIP6); + cntrl1 |= T6_TXPKT_ETHHDR_LEN_V(maclen - ETH_HLEN) | + TXPKT_IPHDR_LEN_V(iplen); + /* checksum offload */ + cpl->ctrl1 = cpu_to_be64(cntrl1); + cpl->len = htons(skb->len); + + pos = cpl + 1; + + cxgb4_write_sgl(skb, &q->q, pos, end, 0, sgl_sdesc->addr); + sgl_sdesc->skb = skb; + chcr_txq_advance(&q->q, ndesc); + cxgb4_ring_tx_db(tx_info->adap, &q->q, ndesc); + return 0; +} + /* * chcr_ktls_copy_record_in_skb * @nskb - new skb where the frags to be added. @@ -1733,7 +1815,7 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, (TLS_CIPHER_AES_GCM_128_TAG_SIZE - remaining_record); if (!trimmed_len) - goto out; + return FALLBACK; WARN_ON(trimmed_len > data_len); @@ -1837,6 +1919,34 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info, return NETDEV_TX_BUSY; } +static int chcr_ktls_sw_fallback(struct sk_buff *skb, + struct chcr_ktls_info *tx_info, + struct sge_eth_txq *q) +{ + u32 data_len, skb_offset; + struct sk_buff *nskb; + struct tcphdr *th; + + nskb = tls_encrypt_skb(skb); + + if (!nskb) + return 0; + + th = tcp_hdr(nskb); + skb_offset = skb_transport_offset(nskb) + tcp_hdrlen(nskb); + data_len = nskb->len - skb_offset; + skb_tx_timestamp(nskb); + + if (chcr_ktls_tunnel_pkt(tx_info, nskb, q)) + goto out; + + tx_info->prev_seq = ntohl(th->seq) + data_len; + atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_fallback); + return 0; +out: + dev_kfree_skb_any(nskb); + return 0; +} /* nic tls TX handler */ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) { @@ -2012,6 +2122,10 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev) if (ret) { if (th->fin) dev_kfree_skb_any(skb); + + if (ret == FALLBACK) + return chcr_ktls_sw_fallback(skb, tx_info, q); + return NETDEV_TX_OK; } diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h index c1651b1431a0..18b3b1f02415 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h @@ -26,6 +26,7 @@ #define CHCR_KTLS_WR_SIZE (CHCR_PLAIN_TX_DATA_LEN +\ sizeof(struct cpl_tx_sec_pdu)) +#define FALLBACK 35 enum ch_ktls_open_state { CH_KTLS_OPEN_SUCCESS = 0, From patchwork Mon Nov 9 10:51:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890971 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BF14C2D0A3 for ; Mon, 9 Nov 2020 10:53:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5243320679 for ; Mon, 9 Nov 2020 10:53:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728917AbgKIKxJ (ORCPT ); Mon, 9 Nov 2020 05:53:09 -0500 Received: from stargate.chelsio.com 
([12.32.117.8]:7304 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729453AbgKIKxJ (ORCPT ); Mon, 9 Nov 2020 05:53:09 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMk011444; Mon, 9 Nov 2020 02:53:04 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 11/12] ch_ktls: tcb update fails sometimes Date: Mon, 9 Nov 2020 16:21:41 +0530 Message-Id: <20201109105142.15398-12-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org context id and port id should be filled while sending tcb update. Fixes: 5a4b9fe7fece ("cxgb4/chcr: complete record tx handling") Signed-off-by: Rohit Maheshwari --- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index b182c940b4a0..a732051b21e4 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -733,7 +733,8 @@ static int chcr_ktls_cpl_set_tcb_rpl(struct adapter *adap, unsigned char *input) } static void *__chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info, - u32 tid, void *pos, u16 word, u64 mask, + u32 tid, void *pos, u16 word, + struct sge_eth_txq *q, u64 mask, u64 val, u32 reply) { struct cpl_set_tcb_field_core *cpl; @@ -742,7 +743,10 @@ static void *__chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info, /* ULP_TXPKT */ txpkt = pos; - txpkt->cmd_dest = htonl(ULPTX_CMD_V(ULP_TX_PKT) | ULP_TXPKT_DEST_V(0)); + txpkt->cmd_dest = htonl(ULPTX_CMD_V(ULP_TX_PKT) | + ULP_TXPKT_CHANNELID_V(tx_info->port_id) | + ULP_TXPKT_FID_V(q->q.cntxt_id) | + ULP_TXPKT_RO_F); txpkt->len = htonl(DIV_ROUND_UP(CHCR_SET_TCB_FIELD_LEN, 16)); /* ULPTX_IDATA sub-command */ @@ -797,7 +801,7 @@ static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info, } else { u8 buf[48] = {0}; - __chcr_write_cpl_set_tcb_ulp(tx_info, tid, buf, word, + __chcr_write_cpl_set_tcb_ulp(tx_info, tid, buf, word, q, mask, val, reply); return chcr_copy_to_txd(buf, &q->q, pos, @@ -805,7 +809,7 @@ static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info, } } - pos = __chcr_write_cpl_set_tcb_ulp(tx_info, tid, pos, word, + pos = __chcr_write_cpl_set_tcb_ulp(tx_info, tid, pos, word, q, mask, val, reply); /* check again if we are at the end of the queue */ From patchwork Mon Nov 9 10:51:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rohit Maheshwari X-Patchwork-Id: 11890975 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) 
with ESMTP id 9B01EC388F7 for ; Mon, 9 Nov 2020 10:53:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5543B206E5 for ; Mon, 9 Nov 2020 10:53:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729462AbgKIKxM (ORCPT ); Mon, 9 Nov 2020 05:53:12 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:64822 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729453AbgKIKxL (ORCPT ); Mon, 9 Nov 2020 05:53:11 -0500 Received: from localhost.localdomain (redhouse.blr.asicdesigners.com [10.193.185.57]) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id 0A9AqQMl011444; Mon, 9 Nov 2020 02:53:07 -0800 From: Rohit Maheshwari To: kuba@kernel.org, netdev@vger.kernel.org, davem@davemloft.net Cc: secdev@chelsio.com, Rohit Maheshwari Subject: [net v6 12/12] ch_ktls: stop the txq if reaches threshold Date: Mon, 9 Nov 2020 16:21:42 +0530 Message-Id: <20201109105142.15398-13-rohitm@chelsio.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20201109105142.15398-1-rohitm@chelsio.com> References: <20201109105142.15398-1-rohitm@chelsio.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Stop the queue and ask for credits if the queue reaches the threshold. Fixes: 5a4b9fe7fece ("cxgb4/chcr: complete record tx handling") Signed-off-by: Rohit Maheshwari --- .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c index a732051b21e4..c24485c0d512 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c @@ -835,7 +835,7 @@ static int chcr_ktls_xmit_tcb_cpls(struct chcr_ktls_info *tx_info, { bool first_wr = ((tx_info->prev_ack == 0) && (tx_info->prev_win == 0)); struct ch_ktls_port_stats_debug *port_stats; - u32 len, cpl = 0, ndesc, wr_len; + u32 len, cpl = 0, ndesc, wr_len, wr_mid = 0; struct fw_ulptx_wr *wr; int credits; void *pos; @@ -851,6 +851,11 @@ static int chcr_ktls_xmit_tcb_cpls(struct chcr_ktls_info *tx_info, return NETDEV_TX_BUSY; } + if (unlikely(credits < ETHTXQ_STOP_THRES)) { + chcr_eth_txq_stop(q); + wr_mid |= FW_WR_EQUEQ_F | FW_WR_EQUIQ_F; + } + pos = &q->q.desc[q->q.pidx]; /* make space for WR, we'll fill it later when we know all the cpls * being sent out and have complete length.
@@ -905,7 +910,8 @@ static int chcr_ktls_xmit_tcb_cpls(struct chcr_ktls_info *tx_info, wr->op_to_compl = htonl(FW_WR_OP_V(FW_ULPTX_WR)); wr->cookie = 0; /* fill len in wr field */ - wr->flowid_len16 = htonl(FW_WR_LEN16_V(DIV_ROUND_UP(len, 16))); + wr->flowid_len16 = htonl(wr_mid | + FW_WR_LEN16_V(DIV_ROUND_UP(len, 16))); ndesc = DIV_ROUND_UP(len, 64); chcr_txq_advance(&q->q, ndesc); @@ -986,6 +992,7 @@ chcr_ktls_write_tcp_options(struct chcr_ktls_info *tx_info, struct sk_buff *skb, struct tcphdr *tcp; int len16, pktlen; struct iphdr *ip; + u32 wr_mid = 0; int credits; u8 buf[150]; u64 cntrl1; @@ -1010,6 +1017,11 @@ chcr_ktls_write_tcp_options(struct chcr_ktls_info *tx_info, struct sk_buff *skb, return NETDEV_TX_BUSY; } + if (unlikely(credits < ETHTXQ_STOP_THRES)) { + chcr_eth_txq_stop(q); + wr_mid |= FW_WR_EQUEQ_F | FW_WR_EQUIQ_F; + } + pos = &q->q.desc[q->q.pidx]; wr = pos; @@ -1017,7 +1029,7 @@ chcr_ktls_write_tcp_options(struct chcr_ktls_info *tx_info, struct sk_buff *skb, wr->op_immdlen = htonl(FW_WR_OP_V(FW_ETH_TX_PKT_WR) | FW_WR_IMMDLEN_V(ctrl)); - wr->equiq_to_len16 = htonl(FW_WR_LEN16_V(len16)); + wr->equiq_to_len16 = htonl(wr_mid | FW_WR_LEN16_V(len16)); wr->r3 = 0; cpl = (void *)(wr + 1);