From patchwork Wed Dec  4 05:56:53 2024
X-Patchwork-Submitter: Bharat Bhushan
X-Patchwork-Id: 13893257
X-Patchwork-Delegate: kuba@kernel.org
From: Bharat Bhushan
Subject: [net-next PATCH v10 2/8] octeontx2-pf: Move skb fragment map/unmap to common code
Date: Wed, 4 Dec 2024 11:26:53 +0530
Message-ID: <20241204055659.1700459-3-bbhushan2@marvell.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241204055659.1700459-1-bbhushan2@marvell.com>
References: <20241204055659.1700459-1-bbhushan2@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

Move the skb fragment map/unmap functions to the common file so that they
can be reused for outbound IPsec crypto offload.

Signed-off-by: Bharat Bhushan
---
 .../marvell/octeontx2/nic/otx2_common.c       | 46 +++++++++++++++++++
 .../marvell/octeontx2/nic/otx2_common.h       |  4 ++
 .../marvell/octeontx2/nic/otx2_txrx.c         | 46 -------------------
 3 files changed, 50 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 523ecb798a7a..971115a5d2cc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 #include "otx2_reg.h"
 #include "otx2_common.h"
@@ -1947,3 +1948,48 @@ EXPORT_SYMBOL(otx2_mbox_up_handler_ ## _fn_name);
 MBOX_UP_CGX_MESSAGES
 MBOX_UP_MCS_MESSAGES
 #undef M
+
+dma_addr_t otx2_dma_map_skb_frag(struct otx2_nic *pfvf,
+				 struct sk_buff *skb, int seg, int *len)
+{
+	enum dma_data_direction dir = DMA_TO_DEVICE;
+	const skb_frag_t *frag;
+	struct page *page;
+	int offset;
+
+	/* Crypto hardware need write permission for ipsec crypto offload */
+	if (unlikely(xfrm_offload(skb))) {
+		dir = DMA_BIDIRECTIONAL;
+		skb = skb_unshare(skb, GFP_ATOMIC);
+	}
+
+	/* First segment is always skb->data */
+	if (!seg) {
+		page = virt_to_page(skb->data);
+		offset = offset_in_page(skb->data);
+		*len = skb_headlen(skb);
+	} else {
+		frag = &skb_shinfo(skb)->frags[seg - 1];
+		page = skb_frag_page(frag);
+		offset = skb_frag_off(frag);
+		*len = skb_frag_size(frag);
+	}
+	return otx2_dma_map_page(pfvf, page, offset, *len, dir);
+}
+
+void otx2_dma_unmap_skb_frags(struct otx2_nic *pfvf, struct sg_list *sg)
+{
+	enum dma_data_direction dir = DMA_TO_DEVICE;
+	struct sk_buff *skb = NULL;
+	int seg;
+
+	skb = (struct sk_buff *)sg->skb;
+	if (unlikely(xfrm_offload(skb)))
+		dir = DMA_BIDIRECTIONAL;
+
+	for (seg = 0; seg < sg->num_segs; seg++) {
+		otx2_dma_unmap_page(pfvf, sg->dma_addr[seg],
+				    sg->size[seg], dir);
+	}
+	sg->num_segs = 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 566848663fea..bb78d825046d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -1149,4 +1149,8 @@ static inline int mcam_entry_cmp(const void *a, const void *b)
 {
 	return *(u16 *)a - *(u16 *)b;
 }
+
+dma_addr_t otx2_dma_map_skb_frag(struct otx2_nic *pfvf,
+				 struct sk_buff *skb, int seg, int *len);
+void otx2_dma_unmap_skb_frags(struct otx2_nic *pfvf, struct sg_list *sg);
 #endif /* OTX2_COMMON_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index 3b0457e52a6a..a49041e55c33 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -11,7 +11,6 @@
 #include
 #include
 #include
-#include
 
 #include "otx2_reg.h"
 #include "otx2_common.h"
@@ -81,51 +80,6 @@ static unsigned int frag_num(unsigned int i)
 #endif
 }
 
-static dma_addr_t otx2_dma_map_skb_frag(struct otx2_nic *pfvf,
-					struct sk_buff *skb, int seg, int *len)
-{
-	enum dma_data_direction dir = DMA_TO_DEVICE;
-	const skb_frag_t *frag;
-	struct page *page;
-	int offset;
-
-	/* Crypto hardware need write permission for ipsec crypto offload */
-	if (unlikely(xfrm_offload(skb))) {
-		dir = DMA_BIDIRECTIONAL;
-		skb = skb_unshare(skb, GFP_ATOMIC);
-	}
-
-	/* First segment is always skb->data */
-	if (!seg) {
-		page = virt_to_page(skb->data);
-		offset = offset_in_page(skb->data);
-		*len = skb_headlen(skb);
-	} else {
-		frag = &skb_shinfo(skb)->frags[seg - 1];
-		page = skb_frag_page(frag);
-		offset = skb_frag_off(frag);
-		*len = skb_frag_size(frag);
-	}
-	return otx2_dma_map_page(pfvf, page, offset, *len, dir);
-}
-
-static void otx2_dma_unmap_skb_frags(struct otx2_nic *pfvf, struct sg_list *sg)
-{
-	enum dma_data_direction dir = DMA_TO_DEVICE;
-	struct sk_buff *skb = NULL;
-	int seg;
-
-	skb = (struct sk_buff *)sg->skb;
-	if (unlikely(xfrm_offload(skb)))
-		dir = DMA_BIDIRECTIONAL;
-
-	for (seg = 0; seg < sg->num_segs; seg++) {
-		otx2_dma_unmap_page(pfvf, sg->dma_addr[seg],
-				    sg->size[seg], dir);
-	}
-	sg->num_segs = 0;
-}
-
 static void otx2_xdp_snd_pkt_handler(struct otx2_nic *pfvf,
 				     struct otx2_snd_queue *sq,
 				     struct nix_cqe_tx_s *cqe)
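
For context, below is a minimal sketch (not part of this patch) of how a transmit
path, such as the outbound IPsec offload added later in this series, might consume
the now-exported helpers. The wrapper name, return code and error handling are
assumptions made for illustration; only otx2_dma_map_skb_frag(),
otx2_dma_unmap_skb_frags() and the sg_list fields used in the diff above come from
the driver.

/* Hypothetical example, not part of this patch: map every segment of an
 * skb (head plus page frags) into an sg_list using the shared helpers,
 * unwinding the mappings already done if one of them fails.
 */
static int example_map_skb_to_sg(struct otx2_nic *pfvf, struct sk_buff *skb,
				 struct sg_list *sg)
{
	int num_segs = skb_shinfo(skb)->nr_frags + 1;
	dma_addr_t dma_addr;
	int seg, len;

	/* Record the skb first; the unmap helper reads sg->skb to pick
	 * the DMA direction for IPsec packets.
	 */
	sg->skb = (u64)skb;
	sg->num_segs = 0;

	/* Segment 0 is skb->data, segments 1..n are the page frags */
	for (seg = 0; seg < num_segs; seg++) {
		dma_addr = otx2_dma_map_skb_frag(pfvf, skb, seg, &len);
		if (dma_mapping_error(pfvf->dev, dma_addr)) {
			/* Undo whatever was mapped so far */
			otx2_dma_unmap_skb_frags(pfvf, sg);
			return -ENOMEM;
		}
		sg->dma_addr[seg] = dma_addr;
		sg->size[seg] = len;
		sg->num_segs++;
	}

	return 0;
}

Note that sg->skb is filled in before any mapping is attempted: otx2_dma_unmap_skb_frags()
uses it to detect xfrm_offload() packets and switch to DMA_BIDIRECTIONAL, matching the
direction chosen at map time.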