From patchwork Wed Mar 31 15:41:35 2021
X-Patchwork-Submitter: Ong Boon Leong
X-Patchwork-Id: 12175629
From: Ong Boon Leong
To: Giuseppe Cavallaro, Alexandre Torgue, Jose Abreu, David S. Miller,
    Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend
Cc: Maxime Coquelin, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, KP Singh, netdev@vger.kernel.org,
    linux-stm32@st-md-mailman.stormreply.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    bpf@vger.kernel.org, Ong Boon Leong
Subject: [PATCH net-next v3 6/6] net: stmmac: Add support for XDP_REDIRECT action
Date: Wed, 31 Mar 2021 23:41:35 +0800
Message-Id: <20210331154135.8507-7-boon.leong.ong@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210331154135.8507-1-boon.leong.ong@intel.com>
References: <20210331154135.8507-1-boon.leong.ong@intel.com>

This patch adds support for XDP_REDIRECT, which redirects a received
frame to another remote CPU for further action. It also implements the
ndo_xdp_xmit ops, enabling the driver to transmit packets that are
forwarded to it by an XDP program running on another interface.

This patch has been tested using the "xdp_redirect_cpu" sample for
XDP_REDIRECT + drop testing. It has also been tested with the
"xdp_redirect" sample app, which can be used to exercise the
ndo_xdp_xmit ops. The burst traffic is generated using
pktgen_sample03_burst_single_flow.sh in the samples/pktgen directory.

v3: Added 'nq->trans_start = jiffies' to avoid a TX time-out, as the TX
    queue is shared between the slow path and XDP. Thanks to Jakub
    Kicinski for pointing this out.
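[Editor's note, not part of the patch: the minimal sketch below shows the
kind of XDP program that would drive the XDP_REDIRECT handling added here
(xdp_do_redirect()/xdp_do_flush()) and, on the target interface, an
ndo_xdp_xmit implementation such as the one this patch adds. The map name
"xdp_tx_ports", the program name, and the single devmap slot are
illustrative assumptions, not taken from this patch or the cited samples.]

// SPDX-License-Identifier: GPL-2.0
/* Illustrative only: redirect every received frame to the ifindex stored
 * at key 0 of a one-entry devmap.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 1);
} xdp_tx_ports SEC(".maps");

SEC("xdp")
int xdp_redirect_sketch(struct xdp_md *ctx)
{
	/* On success this returns XDP_REDIRECT; the driver then calls
	 * xdp_do_redirect() and, once the NAPI poll loop has finished,
	 * xdp_do_flush().
	 */
	return bpf_redirect_map(&xdp_tx_ports, 0, 0);
}

char _license[] SEC("license") = "GPL";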
Signed-off-by: Ong Boon Leong
---
 drivers/net/ethernet/stmicro/stmmac/stmmac.h  |  1 +
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 98 +++++++++++++++++--
 2 files changed, 89 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index a93e22a6be59..c49debb62b05 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -39,6 +39,7 @@ struct stmmac_resources {
 enum stmmac_txbuf_type {
 	STMMAC_TXBUF_T_SKB,
 	STMMAC_TXBUF_T_XDP_TX,
+	STMMAC_TXBUF_T_XDP_NDO,
 };
 
 struct stmmac_tx_info {
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 35e9a738d663..370b3c084dda 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -72,6 +72,7 @@ MODULE_PARM_DESC(phyaddr, "Physical device address");
 #define STMMAC_XDP_PASS		0
 #define STMMAC_XDP_CONSUMED	BIT(0)
 #define STMMAC_XDP_TX		BIT(1)
+#define STMMAC_XDP_REDIRECT	BIT(2)
 
 static int flow_ctrl = FLOW_AUTO;
 module_param(flow_ctrl, int, 0644);
@@ -1458,7 +1459,8 @@ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i)
 	}
 
 	if (tx_q->xdpf[i] &&
-	    tx_q->tx_skbuff_dma[i].buf_type == STMMAC_TXBUF_T_XDP_TX) {
+	    (tx_q->tx_skbuff_dma[i].buf_type == STMMAC_TXBUF_T_XDP_TX ||
+	     tx_q->tx_skbuff_dma[i].buf_type == STMMAC_TXBUF_T_XDP_NDO)) {
 		xdp_return_frame(tx_q->xdpf[i]);
 		tx_q->xdpf[i] = NULL;
 	}
@@ -2220,7 +2222,8 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
 		struct dma_desc *p;
 		int status;
 
-		if (tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_XDP_TX) {
+		if (tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_XDP_TX ||
+		    tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_XDP_NDO) {
 			xdpf = tx_q->xdpf[entry];
 			skb = NULL;
 		} else if (tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_SKB) {
@@ -2292,6 +2295,12 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
 			tx_q->xdpf[entry] = NULL;
 		}
 
+		if (xdpf &&
+		    tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_XDP_NDO) {
+			xdp_return_frame(xdpf);
+			tx_q->xdpf[entry] = NULL;
+		}
+
 		if (tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_SKB) {
 			if (likely(skb)) {
 				pkts_compl++;
@@ -4246,10 +4255,9 @@ static unsigned int stmmac_rx_buf2_len(struct stmmac_priv *priv,
 }
 
 static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
-				struct xdp_frame *xdpf)
+				struct xdp_frame *xdpf, bool dma_map)
 {
 	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
-	struct page *page = virt_to_page(xdpf->data);
 	unsigned int entry = tx_q->cur_tx;
 	struct dma_desc *tx_desc;
 	dma_addr_t dma_addr;
@@ -4265,12 +4273,23 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
 	else
 		tx_desc = tx_q->dma_tx + entry;
 
-	dma_addr = page_pool_get_dma_addr(page) + sizeof(*xdpf) +
-		   xdpf->headroom;
-	dma_sync_single_for_device(priv->device, dma_addr,
-				   xdpf->len, DMA_BIDIRECTIONAL);
+	if (dma_map) {
+		dma_addr = dma_map_single(priv->device, xdpf->data,
+					  xdpf->len, DMA_TO_DEVICE);
+		if (dma_mapping_error(priv->device, dma_addr))
+			return STMMAC_XDP_CONSUMED;
+
+		tx_q->tx_skbuff_dma[entry].buf_type = STMMAC_TXBUF_T_XDP_NDO;
+	} else {
+		struct page *page = virt_to_page(xdpf->data);
 
-	tx_q->tx_skbuff_dma[entry].buf_type = STMMAC_TXBUF_T_XDP_TX;
+		dma_addr = page_pool_get_dma_addr(page) + sizeof(*xdpf) +
+			   xdpf->headroom;
+		dma_sync_single_for_device(priv->device, dma_addr,
+					   xdpf->len, DMA_BIDIRECTIONAL);
+
+		tx_q->tx_skbuff_dma[entry].buf_type = STMMAC_TXBUF_T_XDP_TX;
+	}
 
 	tx_q->tx_skbuff_dma[entry].buf = dma_addr;
 	tx_q->tx_skbuff_dma[entry].map_as_page = false;
@@ -4339,7 +4358,7 @@ static int stmmac_xdp_xmit_back(struct stmmac_priv *priv,
 	__netif_tx_lock(nq, cpu);
 	/* Avoids TX time-out as we are sharing with slow path */
 	nq->trans_start = jiffies;
-	res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf);
+	res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf, false);
 	if (res == STMMAC_XDP_TX) {
 		stmmac_flush_tx_descriptors(priv, queue);
 		stmmac_tx_timer_arm(priv, queue);
@@ -4372,6 +4391,12 @@ static struct sk_buff *stmmac_xdp_run_prog(struct stmmac_priv *priv,
 	case XDP_TX:
 		res = stmmac_xdp_xmit_back(priv, xdp);
 		break;
+	case XDP_REDIRECT:
+		if (xdp_do_redirect(priv->dev, xdp, prog) < 0)
+			res = STMMAC_XDP_CONSUMED;
+		else
+			res = STMMAC_XDP_REDIRECT;
+		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
 		fallthrough;
@@ -4407,6 +4432,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 	unsigned int desc_size;
 	struct sk_buff *skb = NULL;
 	struct xdp_buff xdp;
+	int xdp_status = 0;
 	int buf_sz;
 
 	dma_dir = page_pool_get_dma_dir(rx_q->page_pool);
@@ -4576,6 +4602,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 				skb = NULL;
 				count++;
 				continue;
+			} else if (xdp_res & STMMAC_XDP_REDIRECT) {
+				xdp_status |= xdp_res;
+				buf->page = NULL;
+				skb = NULL;
+				count++;
+				continue;
 			}
 		}
 	}
@@ -4658,6 +4690,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 		rx_q->state.len = len;
 	}
 
+	if (xdp_status & STMMAC_XDP_REDIRECT)
+		xdp_do_flush();
+
 	stmmac_rx_refill(priv, queue);
 
 	priv->xstats.rx_pkt_n += count;
@@ -5584,6 +5619,48 @@ static int stmmac_bpf(struct net_device *dev, struct netdev_bpf *bpf)
 	}
 }
 
+static int stmmac_xdp_xmit(struct net_device *dev, int num_frames,
+			   struct xdp_frame **frames, u32 flags)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+	int cpu = smp_processor_id();
+	struct netdev_queue *nq;
+	int i, nxmit = 0;
+	int queue;
+
+	if (unlikely(test_bit(STMMAC_DOWN, &priv->state)))
+		return -ENETDOWN;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	queue = stmmac_xdp_get_tx_queue(priv, cpu);
+	nq = netdev_get_tx_queue(priv->dev, queue);
+
+	__netif_tx_lock(nq, cpu);
+	/* Avoids TX time-out as we are sharing with slow path */
+	nq->trans_start = jiffies;
+
+	for (i = 0; i < num_frames; i++) {
+		int res;
+
+		res = stmmac_xdp_xmit_xdpf(priv, queue, frames[i], true);
+		if (res == STMMAC_XDP_CONSUMED)
+			break;
+
+		nxmit++;
+	}
+
+	if (flags & XDP_XMIT_FLUSH) {
+		stmmac_flush_tx_descriptors(priv, queue);
+		stmmac_tx_timer_arm(priv, queue);
+	}
+
+	__netif_tx_unlock(nq);
+
+	return nxmit;
+}
+
 static const struct net_device_ops stmmac_netdev_ops = {
 	.ndo_open = stmmac_open,
 	.ndo_start_xmit = stmmac_xmit,
@@ -5603,6 +5680,7 @@ static const struct net_device_ops stmmac_netdev_ops = {
 	.ndo_vlan_rx_add_vid = stmmac_vlan_rx_add_vid,
 	.ndo_vlan_rx_kill_vid = stmmac_vlan_rx_kill_vid,
 	.ndo_bpf = stmmac_bpf,
+	.ndo_xdp_xmit = stmmac_xdp_xmit,
 };
 
 static void stmmac_reset_subtask(struct stmmac_priv *priv)
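[Editor's note, not part of the patch: the commit message mentions the
"xdp_redirect" and "xdp_redirect_cpu" samples for testing. As a rough
companion to the program sketched above, the libbpf-based loader below
shows one way the redirect path could be wired up by hand. The object and
interface names ("xdp_redirect_sketch.o", "eth0", "eth1") are illustrative
assumptions, and libbpf 1.x is assumed for bpf_xdp_attach(); the cited
sample apps provide the full-featured equivalent.]

// SPDX-License-Identifier: GPL-2.0
/* Illustrative loader: attach the sketch program to "eth0" and point its
 * devmap entry at "eth1", so frames received on eth0 are redirected out
 * of eth1 via that interface's ndo_xdp_xmit.
 */
#include <stdio.h>
#include <unistd.h>
#include <net/if.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
	int rx_ifindex = if_nametoindex("eth0");	/* XDP_REDIRECT source */
	int tx_ifindex = if_nametoindex("eth1");	/* redirect target */
	struct bpf_program *prog;
	struct bpf_object *obj;
	int map_fd, key = 0;

	obj = bpf_object__open_file("xdp_redirect_sketch.o", NULL);
	if (!obj || bpf_object__load(obj))
		return 1;

	/* Store the target ifindex in slot 0 of the devmap. */
	map_fd = bpf_object__find_map_fd_by_name(obj, "xdp_tx_ports");
	if (map_fd < 0 || bpf_map_update_elem(map_fd, &key, &tx_ifindex, BPF_ANY))
		return 1;

	prog = bpf_object__find_program_by_name(obj, "xdp_redirect_sketch");
	if (!prog || bpf_xdp_attach(rx_ifindex, bpf_program__fd(prog), 0, NULL))
		return 1;

	printf("redirecting eth0 -> eth1 (detach with 'ip link set dev eth0 xdp off')\n");
	pause();
	return 0;
}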