From patchwork Tue Sep 10 12:48:41 2024
X-Patchwork-Submitter: Suraj Jaiswal
X-Patchwork-Id: 13798490
From: Suraj Jaiswal
Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , Maxime Coquelin , , , , , Prasad Sodagudi , Andrew Halaney , Rob Herring CC: Subject: [PATCH v2] net: stmmac: allocate separate page for buffer Date: Tue, 10 Sep 2024 18:18:41 +0530 Message-ID: <20240910124841.2205629-2-quic_jsuraj@quicinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240910124841.2205629-1-quic_jsuraj@quicinc.com> References: <20240910124841.2205629-1-quic_jsuraj@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: Ynw6_U9wJATRC4XEURlu8QaulYYyNNvS X-Proofpoint-ORIG-GUID: Ynw6_U9wJATRC4XEURlu8QaulYYyNNvS X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.60.29 definitions=2024-09-06_09,2024-09-06_01,2024-09-02_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 clxscore=1011 impostorscore=0 mlxscore=0 bulkscore=0 suspectscore=0 priorityscore=1501 mlxlogscore=999 lowpriorityscore=0 adultscore=0 phishscore=0 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2408220000 definitions=main-2409100095 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240910_054919_847656_7F43496A X-CRM114-Status: GOOD ( 21.61 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Currently for TSO page is mapped with dma_map_single() and then resulting dma address is referenced (and offset) by multiple descriptors until the whole region is programmed into the descriptors. This makes it possible for stmmac_tx_clean() to dma_unmap() the first of the already processed descriptors, while the rest are still being processed by the DMA engine. This leads to an iommu fault due to the DMA engine using unmapped memory as seen below: arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0xfc401000, fsynr=0x60003, cbfrsynra=0x121, cb=38 Descriptor content: TDES0 TDES1 TDES2 TDES3 317: 0xfc400800 0x0 0x36 0xa02c0b68 318: 0xfc400836 0x0 0xb68 0x90000000 As we can see above descriptor 317 holding a page address and 318 holding the buffer address by adding offset to page addess. Now if 317 descritor is cleaned as part of tx_clean() then we will get SMMU fault if 318 descriptor is getting accessed. To fix this, let's map each descriptor's memory reference individually. This way there's no risk of unmapping a region that's still being referenced by the DMA engine in a later descriptor. Signed-off-by: Suraj Jaiswal --- Changes since v2: - Update commit text with more details. - fixed Reverse xmas tree order issue. Changes since v1: - Fixed function description - Fixed handling of return value. 
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 63 ++++++++++++-------
 1 file changed, 42 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 83b654b7a9fd..98d5a4b64cac 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -4136,21 +4136,25 @@ static bool stmmac_vlan_insert(struct stmmac_priv *priv, struct sk_buff *skb,
 /**
  * stmmac_tso_allocator - close entry point of the driver
  * @priv: driver private structure
- * @des: buffer start address
+ * @addr: Contains either skb frag address or skb->data address
  * @total_len: total length to fill in descriptors
  * @last_segment: condition for the last descriptor
  * @queue: TX queue index
+ * @is_skb_frag: condition to check whether skb data is part of fragment or not
  * Description:
  * This function fills descriptor and request new descriptors according to
  * buffer length to fill
+ * This function returns 0 on success else -ERRNO on fail
  */
-static void stmmac_tso_allocator(struct stmmac_priv *priv, dma_addr_t des,
-                                 int total_len, bool last_segment, u32 queue)
+static int stmmac_tso_allocator(struct stmmac_priv *priv, void *addr,
+                                int total_len, bool last_segment, u32 queue, bool is_skb_frag)
 {
         struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
         struct dma_desc *desc;
         u32 buff_size;
         int tmp_len;
+        unsigned char *data = addr;
+        unsigned int offset = 0;
 
         tmp_len = total_len;
 
@@ -4161,20 +4165,44 @@ static void stmmac_tso_allocator(struct stmmac_priv *priv, dma_addr_t des,
                                                 priv->dma_conf.dma_tx_size);
                 WARN_ON(tx_q->tx_skbuff[tx_q->cur_tx]);
 
+                buff_size = tmp_len >= TSO_MAX_BUFF_SIZE ? TSO_MAX_BUFF_SIZE : tmp_len;
+
                 if (tx_q->tbs & STMMAC_TBS_AVAIL)
                         desc = &tx_q->dma_entx[tx_q->cur_tx].basic;
                 else
                         desc = &tx_q->dma_tx[tx_q->cur_tx];
 
-                curr_addr = des + (total_len - tmp_len);
+                offset = total_len - tmp_len;
+                if (!is_skb_frag) {
+                        curr_addr = dma_map_single(priv->device, data + offset, buff_size,
+                                                   DMA_TO_DEVICE);
+
+                        if (dma_mapping_error(priv->device, curr_addr))
+                                return -ENOMEM;
+
+                        tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = curr_addr;
+                        tx_q->tx_skbuff_dma[tx_q->cur_tx].len = buff_size;
+                        tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false;
+                        tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
+                } else {
+                        curr_addr = skb_frag_dma_map(priv->device, addr, offset,
+                                                     buff_size,
+                                                     DMA_TO_DEVICE);
+
+                        if (dma_mapping_error(priv->device, curr_addr))
+                                return -ENOMEM;
+
+                        tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = curr_addr;
+                        tx_q->tx_skbuff_dma[tx_q->cur_tx].len = buff_size;
+                        tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = true;
+                        tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
+                }
+
                 if (priv->dma_cap.addr64 <= 32)
                         desc->des0 = cpu_to_le32(curr_addr);
                 else
                         stmmac_set_desc_addr(priv, desc, curr_addr);
 
-                buff_size = tmp_len >= TSO_MAX_BUFF_SIZE ?
-                            TSO_MAX_BUFF_SIZE : tmp_len;
-
                 stmmac_prepare_tso_tx_desc(priv, desc, 0, buff_size,
                                            0, 1,
                                            (last_segment) && (tmp_len <= TSO_MAX_BUFF_SIZE),
@@ -4182,6 +4210,7 @@ static void stmmac_tso_allocator(struct stmmac_priv *priv, dma_addr_t des,
 
                 tmp_len -= TSO_MAX_BUFF_SIZE;
         }
+        return 0;
 }
 
 static void stmmac_flush_tx_descriptors(struct stmmac_priv *priv, int queue)
@@ -4351,25 +4380,17 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
                 pay_len = 0;
         }
 
-        stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
+        if (stmmac_tso_allocator(priv, (skb->data + proto_hdr_len),
+                                 tmp_pay_len, nfrags == 0, queue, false))
+                goto dma_map_err;
 
         /* Prepare fragments */
         for (i = 0; i < nfrags; i++) {
-                const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+                skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 
-                des = skb_frag_dma_map(priv->device, frag, 0,
-                                       skb_frag_size(frag),
-                                       DMA_TO_DEVICE);
-                if (dma_mapping_error(priv->device, des))
+                if (stmmac_tso_allocator(priv, frag, skb_frag_size(frag),
+                                         (i == nfrags - 1), queue, true))
                         goto dma_map_err;
-
-                stmmac_tso_allocator(priv, des, skb_frag_size(frag),
-                                     (i == nfrags - 1), queue);
-
-                tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
-                tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_frag_size(frag);
-                tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = true;
-                tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
         }
 
         tx_q->tx_skbuff_dma[tx_q->cur_tx].last_segment = true;
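For quick reference, the calling convention that results from the
stmmac_tso_xmit() hunk above, condensed (error unwinding at dma_map_err
omitted): the linear region is passed with is_skb_frag == false, so the
helper maps it with dma_map_single(), while each page fragment is passed
with is_skb_frag == true, so the helper maps it with skb_frag_dma_map().

	/* Linear payload that follows the protocol headers. */
	if (stmmac_tso_allocator(priv, skb->data + proto_hdr_len,
				 tmp_pay_len, nfrags == 0, queue, false))
		goto dma_map_err;

	/* Page fragments: each TSO_MAX_BUFF_SIZE chunk is mapped by the helper. */
	for (i = 0; i < nfrags; i++) {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		if (stmmac_tso_allocator(priv, frag, skb_frag_size(frag),
					 i == nfrags - 1, queue, true))
			goto dma_map_err;
	}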