From patchwork Tue Jun 18 07:56:34 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13701930
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux.dev, bpf@vger.kernel.org
Subject: [PATCH net-next v6 01/10] virtio_net: separate virtnet_rx_resize()
Date: Tue, 18 Jun 2024 15:56:34 +0800
Message-Id: <20240618075643.24867-2-xuanzhuo@linux.alibaba.com>

This patch separates two sub-functions from virtnet_rx_resize():

* virtnet_rx_pause
* virtnet_rx_resume

The subsequent rx reset for xsk can then share these two functions.

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 61a57d134544..8ad158bbf188 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2609,28 +2609,41 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
         return NETDEV_TX_OK;
 }
 
-static int virtnet_rx_resize(struct virtnet_info *vi,
-                             struct receive_queue *rq, u32 ring_num)
+static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
 {
         bool running = netif_running(vi->dev);
-        int err, qindex;
-
-        qindex = rq - vi->rq;
 
         if (running) {
                 napi_disable(&rq->napi);
                 cancel_work_sync(&rq->dim.work);
         }
+}
 
-        err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_unmap_free_buf);
-        if (err)
-                netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
+static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
+{
+        bool running = netif_running(vi->dev);
 
         if (!try_fill_recv(vi, rq, GFP_KERNEL))
                 schedule_delayed_work(&vi->refill, 0);
 
         if (running)
                 virtnet_napi_enable(rq->vq, &rq->napi);
+}
+
+static int virtnet_rx_resize(struct virtnet_info *vi,
+                             struct receive_queue *rq, u32 ring_num)
+{
+        int err, qindex;
+
+        qindex = rq - vi->rq;
+
+        virtnet_rx_pause(vi, rq);
+
+        err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_unmap_free_buf);
+        if (err)
+                netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
+
+        virtnet_rx_resume(vi, rq);
 
         return err;
 }
From patchwork Tue Jun 18 07:56:35 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13701931
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux.dev, bpf@vger.kernel.org
Subject: [PATCH net-next v6 02/10] virtio_net: separate virtnet_tx_resize()
Date: Tue, 18 Jun 2024 15:56:35 +0800
Message-Id: <20240618075643.24867-3-xuanzhuo@linux.alibaba.com>

This patch separates two sub-functions from virtnet_tx_resize():

* virtnet_tx_pause
* virtnet_tx_resume

The subsequent virtnet_tx_reset() can then share these two functions.

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 8ad158bbf188..1ac5f472d4ef 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2647,12 +2647,11 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
         return err;
 }
 
-static int virtnet_tx_resize(struct virtnet_info *vi,
-                             struct send_queue *sq, u32 ring_num)
+static void virtnet_tx_pause(struct virtnet_info *vi, struct send_queue *sq)
 {
         bool running = netif_running(vi->dev);
         struct netdev_queue *txq;
-        int err, qindex;
+        int qindex;
 
         qindex = sq - vi->sq;
 
@@ -2673,10 +2672,17 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
         netif_stop_subqueue(vi->dev, qindex);
 
         __netif_tx_unlock_bh(txq);
+}
 
-        err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
-        if (err)
-                netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
+static void virtnet_tx_resume(struct virtnet_info *vi, struct send_queue *sq)
+{
+        bool running = netif_running(vi->dev);
+        struct netdev_queue *txq;
+        int qindex;
+
+        qindex = sq - vi->sq;
+
+        txq = netdev_get_tx_queue(vi->dev, qindex);
 
         __netif_tx_lock_bh(txq);
         sq->reset = false;
@@ -2685,6 +2691,23 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
 
         if (running)
                 virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
+}
+
+static int virtnet_tx_resize(struct virtnet_info *vi, struct send_queue *sq,
+                             u32 ring_num)
+{
+        int qindex, err;
+
+        qindex = sq - vi->sq;
+
+        virtnet_tx_pause(vi, sq);
+
+        err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
+        if (err)
+                netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
+
+        virtnet_tx_resume(vi, sq);
+
         return err;
 }
From patchwork Tue Jun 18 07:56:36 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13701932
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux.dev, bpf@vger.kernel.org
Subject: [PATCH net-next v6 03/10] virtio_net: separate receive_buf
Date: Tue, 18 Jun 2024 15:56:36 +0800
Message-Id: <20240618075643.24867-4-xuanzhuo@linux.alibaba.com>

This commit separates the function receive_buf(): the logic that hands the
skb to the stack is wrapped into an independent function,
virtnet_receive_done(). A subsequent commit will reuse it.

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 56 +++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 24 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 1ac5f472d4ef..59fcaeb6ea81 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1935,32 +1935,11 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
         skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type);
 }
 
-static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
-                        void *buf, unsigned int len, void **ctx,
-                        unsigned int *xdp_xmit,
-                        struct virtnet_rq_stats *stats)
+static void virtnet_receive_done(struct virtnet_info *vi, struct receive_queue *rq,
+                                 struct sk_buff *skb)
 {
-        struct net_device *dev = vi->dev;
-        struct sk_buff *skb;
         struct virtio_net_common_hdr *hdr;
-
-        if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
-                pr_debug("%s: short packet %i\n", dev->name, len);
-                DEV_STATS_INC(dev, rx_length_errors);
-                virtnet_rq_free_buf(vi, rq, buf);
-                return;
-        }
-
-        if (vi->mergeable_rx_bufs)
-                skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
-                                        stats);
-        else if (vi->big_packets)
-                skb = receive_big(dev, vi, rq, buf, len, stats);
-        else
-                skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats);
-
-        if (unlikely(!skb))
-                return;
+        struct net_device *dev = vi->dev;
 
         hdr = skb_vnet_common_hdr(skb);
         if (dev->features & NETIF_F_RXHASH && vi->has_rss_hash_report)
@@ -1990,6 +1969,35 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
         dev_kfree_skb(skb);
 }
 
+static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+                        void *buf, unsigned int len, void **ctx,
+                        unsigned int *xdp_xmit,
+                        struct virtnet_rq_stats *stats)
+{
+        struct net_device *dev = vi->dev;
+        struct sk_buff *skb;
+
+        if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
+                pr_debug("%s: short packet %i\n", dev->name, len);
+                DEV_STATS_INC(dev, rx_length_errors);
+                virtnet_rq_free_buf(vi, rq, buf);
+                return;
+        }
+
+        if (vi->mergeable_rx_bufs)
+                skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
+                                        stats);
+        else if (vi->big_packets)
+                skb = receive_big(dev, vi, rq, buf, len, stats);
+        else
+                skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats);
+
+        if (unlikely(!skb))
+                return;
+
+        virtnet_receive_done(vi, rq, skb);
+}
+
 /* Unlike mergeable buffers, all buffers are allocated to the
  * same size, except for the headroom. For this reason we do
  * not need to use mergeable_len_to_ctx here - it is enough
From patchwork Tue Jun 18 07:56:37 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13701936
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux.dev, bpf@vger.kernel.org
Subject: [PATCH net-next v6 04/10] virtio_net: separate receive_mergeable
Date: Tue, 18 Jun 2024 15:56:37 +0800
Message-Id: <20240618075643.24867-5-xuanzhuo@linux.alibaba.com>

This commit separates the function receive_mergeable(): the logic that
appends a frag to the skb is moved into an independent function. A
subsequent commit will reuse it.

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 77 ++++++++++++++++++++++++----------------
 1 file changed, 47 insertions(+), 30 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 59fcaeb6ea81..df885cdbe658 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1788,6 +1788,49 @@ static struct sk_buff *receive_mergeable_xdp(struct net_device *dev,
         return NULL;
 }
 
+static struct sk_buff *virtnet_skb_append_frag(struct sk_buff *head_skb,
+                                               struct sk_buff *curr_skb,
+                                               struct page *page, void *buf,
+                                               int len, int truesize)
+{
+        int num_skb_frags;
+        int offset;
+
+        num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
+        if (unlikely(num_skb_frags == MAX_SKB_FRAGS)) {
+                struct sk_buff *nskb = alloc_skb(0, GFP_ATOMIC);
+
+                if (unlikely(!nskb))
+                        return NULL;
+
+                if (curr_skb == head_skb)
+                        skb_shinfo(curr_skb)->frag_list = nskb;
+                else
+                        curr_skb->next = nskb;
+                curr_skb = nskb;
+                head_skb->truesize += nskb->truesize;
+                num_skb_frags = 0;
+        }
+
+        if (curr_skb != head_skb) {
+                head_skb->data_len += len;
+                head_skb->len += len;
+                head_skb->truesize += truesize;
+        }
+
+        offset = buf - page_address(page);
+        if (skb_can_coalesce(curr_skb, num_skb_frags, page, offset)) {
+                put_page(page);
+                skb_coalesce_rx_frag(curr_skb, num_skb_frags - 1,
+                                     len, truesize);
+        } else {
+                skb_add_rx_frag(curr_skb, num_skb_frags, page,
+                                offset, len, truesize);
+        }
+
+        return curr_skb;
+}
+
 static struct sk_buff *receive_mergeable(struct net_device *dev,
                                          struct virtnet_info *vi,
                                          struct receive_queue *rq,
@@ -1837,8 +1880,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
         if (unlikely(!curr_skb))
                 goto err_skb;
         while (--num_buf) {
-                int num_skb_frags;
-
                 buf = virtnet_rq_get_buf(rq, &len, &ctx);
                 if (unlikely(!buf)) {
                         pr_debug("%s: rx error: %d buffers out of %d missing\n",
@@ -1863,34 +1904,10 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
                         goto err_skb;
                 }
 
-                num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
-                if (unlikely(num_skb_frags == MAX_SKB_FRAGS)) {
-                        struct sk_buff *nskb = alloc_skb(0, GFP_ATOMIC);
-
-                        if (unlikely(!nskb))
-                                goto err_skb;
-                        if (curr_skb == head_skb)
-                                skb_shinfo(curr_skb)->frag_list = nskb;
-                        else
-                                curr_skb->next = nskb;
-                        curr_skb = nskb;
-                        head_skb->truesize += nskb->truesize;
-                        num_skb_frags = 0;
-                }
-                if (curr_skb != head_skb) {
-                        head_skb->data_len += len;
-                        head_skb->len += len;
-                        head_skb->truesize += truesize;
-                }
-                offset = buf - page_address(page);
-                if (skb_can_coalesce(curr_skb, num_skb_frags, page, offset)) {
-                        put_page(page);
-                        skb_coalesce_rx_frag(curr_skb, num_skb_frags - 1,
-                                             len, truesize);
-                } else {
-                        skb_add_rx_frag(curr_skb, num_skb_frags, page,
-                                        offset, len, truesize);
-                }
+                curr_skb = virtnet_skb_append_frag(head_skb, curr_skb, page,
+                                                   buf, len, truesize);
+                if (!curr_skb)
+                        goto err_skb;
         }
 
         ewma_pkt_len_add(&rq->mrg_avg_pkt_len, head_skb->len);
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux.dev, bpf@vger.kernel.org Subject: [PATCH net-next v6 05/10] virtio_net: xsk: bind/unbind xsk for rx Date: Tue, 18 Jun 2024 15:56:38 +0800 Message-Id: <20240618075643.24867-6-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20240618075643.24867-1-xuanzhuo@linux.alibaba.com> References: <20240618075643.24867-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 8baa0af3684b X-Patchwork-Delegate: kuba@kernel.org This patch implement the logic of bind/unbind xsk pool to rq. Signed-off-by: Xuan Zhuo --- drivers/net/virtio_net.c | 133 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 133 insertions(+) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index df885cdbe658..d8cce143be26 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -25,6 +25,7 @@ #include #include #include +#include static int napi_weight = NAPI_POLL_WEIGHT; module_param(napi_weight, int, 0444); @@ -348,6 +349,13 @@ struct receive_queue { /* Record the last dma info to free after new pages is allocated. */ struct virtnet_rq_dma *last_dma; + + struct { + struct xsk_buff_pool *pool; + + /* xdp rxq used by xsk */ + struct xdp_rxq_info xdp_rxq; + } xsk; }; /* This structure can contain rss message with maximum settings for indirection table and keysize @@ -4970,6 +4978,129 @@ static int virtnet_restore_guest_offloads(struct virtnet_info *vi) return virtnet_set_guest_offloads(vi, offloads); } +static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq, + struct xsk_buff_pool *pool) +{ + int err, qindex; + + qindex = rq - vi->rq; + + if (pool) { + err = xdp_rxq_info_reg(&rq->xsk.xdp_rxq, vi->dev, qindex, rq->napi.napi_id); + if (err < 0) + return err; + + err = xdp_rxq_info_reg_mem_model(&rq->xsk.xdp_rxq, + MEM_TYPE_XSK_BUFF_POOL, NULL); + if (err < 0) + goto unreg; + + xsk_pool_set_rxq_info(pool, &rq->xsk.xdp_rxq); + } + + virtnet_rx_pause(vi, rq); + + err = virtqueue_reset(rq->vq, virtnet_rq_unmap_free_buf); + if (err) { + netdev_err(vi->dev, "reset rx fail: rx queue index: %d err: %d\n", qindex, err); + + pool = NULL; + } + + rq->xsk.pool = pool; + + virtnet_rx_resume(vi, rq); + + if (pool) + return 0; + +unreg: + xdp_rxq_info_unreg(&rq->xsk.xdp_rxq); + return err; +} + +static int virtnet_xsk_pool_enable(struct net_device *dev, + struct xsk_buff_pool *pool, + u16 qid) +{ + struct virtnet_info *vi = netdev_priv(dev); + struct receive_queue *rq; + struct device *dma_dev; + struct send_queue *sq; + int err; + + /* In big_packets mode, xdp cannot work, so there is no need to + * initialize xsk of rq. + */ + if (vi->big_packets && !vi->mergeable_rx_bufs) + return -ENOENT; + + if (qid >= vi->curr_queue_pairs) + return -EINVAL; + + sq = &vi->sq[qid]; + rq = &vi->rq[qid]; + + /* For the xsk, the tx and rx should have the same device. The af-xdp + * may use one buffer to receive from the rx and reuse this buffer to + * send by the tx. So the dma dev of sq and rq should be the same one. + * + * But vq->dma_dev allows every vq has the respective dma dev. So I + * check the dma dev of vq and sq is the same dev. 
+ */ + if (virtqueue_dma_dev(rq->vq) != virtqueue_dma_dev(sq->vq)) + return -EPERM; + + dma_dev = virtqueue_dma_dev(rq->vq); + if (!dma_dev) + return -EPERM; + + err = xsk_pool_dma_map(pool, dma_dev, 0); + if (err) + goto err_xsk_map; + + err = virtnet_rq_bind_xsk_pool(vi, rq, pool); + if (err) + goto err_rq; + + return 0; + +err_rq: + xsk_pool_dma_unmap(pool, 0); +err_xsk_map: + return err; +} + +static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid) +{ + struct virtnet_info *vi = netdev_priv(dev); + struct xsk_buff_pool *pool; + struct receive_queue *rq; + int err; + + if (qid >= vi->curr_queue_pairs) + return -EINVAL; + + rq = &vi->rq[qid]; + + pool = rq->xsk.pool; + + err = virtnet_rq_bind_xsk_pool(vi, rq, NULL); + + xsk_pool_dma_unmap(pool, 0); + + return err; +} + +static int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp) +{ + if (xdp->xsk.pool) + return virtnet_xsk_pool_enable(dev, xdp->xsk.pool, + xdp->xsk.queue_id); + else + return virtnet_xsk_pool_disable(dev, xdp->xsk.queue_id); +} + static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog, struct netlink_ext_ack *extack) { @@ -5095,6 +5226,8 @@ static int virtnet_xdp(struct net_device *dev, struct netdev_bpf *xdp) switch (xdp->command) { case XDP_SETUP_PROG: return virtnet_xdp_set(dev, xdp->prog, xdp->extack); + case XDP_SETUP_XSK_POOL: + return virtnet_xsk_pool_setup(dev, xdp); default: return -EINVAL; } From patchwork Tue Jun 18 07:56:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13701938 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-110.freemail.mail.aliyun.com (out30-110.freemail.mail.aliyun.com [115.124.30.110]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C994619E7DD; Tue, 18 Jun 2024 07:56:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=115.124.30.110 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718697420; cv=none; b=U0T0T3IA8Dak7M/z5mzmjxLU+7Ak5Bldrzdw1jJwbBT/vtWuwiClyNPYGYxYlSfaZHcIDC6+eGNzDP6oUtTj3O2IkkVqbpnEFlp75SqayQR1djNfVwXhesSyJuqPqIfa2R+o7mcE+Z4i8j7VNcGdHcBsEzKkueayY45YDUt1+hY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718697420; c=relaxed/simple; bh=ujRQgVR1me4J6+GwMqJ69HKjg8vldD6L9KIr0nVFAQg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=kWwiCSjuXV8ft0QauCs71WASn15mIE5et8MPNl7AYdGOUW8aERSlBuPrkNKEj9tP5FYZdLw9jK5tRjJhng35vlrDZgFJk7Cob3rrXe1un+ooN4Md9AZAMU7yXxV3wwtUE+gkKmhMvwX9mxoXRaNVzz5TjEhu/uZUrHU+8kXjcKk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com; spf=pass smtp.mailfrom=linux.alibaba.com; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b=h5Eqxrda; arc=none smtp.client-ip=115.124.30.110 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b="h5Eqxrda" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.alibaba.com; s=default; t=1718697410; h=From:To:Subject:Date:Message-Id:MIME-Version; 
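For context, XDP_SETUP_XSK_POOL is issued by the AF_XDP core when user space binds an XDP socket to a specific queue of the device. The following is a minimal user-space sketch of that binding (not part of this series), assuming the libxdp xsk helpers; the interface name "eth0", queue 0, the frame count, and the helper name bind_xsk() are illustrative only.

/* Illustrative sketch: create a UMEM and bind an AF_XDP socket to
 * ("eth0", queue 0). xsk_socket__create() is what ultimately reaches the
 * driver's ndo_bpf(XDP_SETUP_XSK_POOL) path shown above. */
#include <stdlib.h>
#include <unistd.h>
#include <xdp/xsk.h>            /* libxdp; older setups ship <bpf/xsk.h> */

#define NUM_FRAMES 4096
#define FRAME_SIZE XSK_UMEM__DEFAULT_FRAME_SIZE

static struct xsk_ring_prod fill;   /* fill ring (user hands buffers to kernel) */
static struct xsk_ring_cons comp;   /* completion ring for tx */
static struct xsk_ring_cons rx;     /* rx descriptor ring */
static struct xsk_ring_prod tx;     /* tx descriptor ring */

static struct xsk_socket *bind_xsk(struct xsk_umem **umem_out)
{
        struct xsk_socket *xsk;
        struct xsk_umem *umem;
        void *bufs;

        /* UMEM area shared between the application and the kernel */
        if (posix_memalign(&bufs, getpagesize(), NUM_FRAMES * FRAME_SIZE))
                return NULL;

        if (xsk_umem__create(&umem, bufs, NUM_FRAMES * FRAME_SIZE,
                             &fill, &comp, NULL))
                return NULL;

        /* Binding to (ifname, queue_id) triggers XDP_SETUP_XSK_POOL */
        if (xsk_socket__create(&xsk, "eth0", 0, umem, &rx, &tx, NULL))
                return NULL;

        *umem_out = umem;
        return xsk;
}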
From patchwork Tue Jun 18 07:56:39 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13701938
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux.dev, bpf@vger.kernel.org
Subject: [PATCH net-next v6 06/10] virtio_net: xsk: support wakeup
Date: Tue, 18 Jun 2024 15:56:39 +0800
Message-Id: <20240618075643.24867-7-xuanzhuo@linux.alibaba.com>

xsk wakeup is used by the xsk framework or the user to trigger the xsk
xmit logic. Virtio-net cannot actively generate an interrupt, so it
triggers tx NAPI on the local CPU instead.

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d8cce143be26..2bbc715f22c6 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1032,6 +1032,29 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
         }
 }
 
+static int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag)
+{
+        struct virtnet_info *vi = netdev_priv(dev);
+        struct send_queue *sq;
+
+        if (!netif_running(dev))
+                return -ENETDOWN;
+
+        if (qid >= vi->curr_queue_pairs)
+                return -EINVAL;
+
+        sq = &vi->sq[qid];
+
+        if (napi_if_scheduled_mark_missed(&sq->napi))
+                return 0;
+
+        local_bh_disable();
+        virtqueue_napi_schedule(&sq->napi, sq->vq);
+        local_bh_enable();
+
+        return 0;
+}
+
 static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
                                   struct send_queue *sq,
                                   struct xdp_frame *xdpf)
@@ -5312,6 +5335,7 @@ static const struct net_device_ops virtnet_netdev = {
         .ndo_vlan_rx_kill_vid = virtnet_vlan_rx_kill_vid,
         .ndo_bpf = virtnet_xdp,
         .ndo_xdp_xmit = virtnet_xdp_xmit,
+        .ndo_xsk_wakeup = virtnet_xsk_wakeup,
         .ndo_features_check = passthru_features_check,
         .ndo_get_phys_port_name = virtnet_get_phys_port_name,
         .ndo_set_features = virtnet_set_features,
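The driver-side wakeup above pairs with the user-space need_wakeup protocol: when the kernel marks a ring as needing a wakeup, the application must kick the socket so that ndo_xsk_wakeup() runs. Below is a minimal sketch of that convention (not from this series), assuming the standard libxdp ring helpers and a socket/tx ring created as in the earlier sketch.

/* Illustrative sketch: kick the kernel so it processes the tx ring; with
 * need_wakeup enabled this is what ends up invoking ndo_xsk_wakeup(). */
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <xdp/xsk.h>

static void kick_tx(struct xsk_socket *xsk, struct xsk_ring_prod *tx_ring)
{
        if (!xsk_ring_prod__needs_wakeup(tx_ring))
                return;         /* kernel is already processing the ring */

        if (sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0) < 0 &&
            errno != EAGAIN && errno != EBUSY && errno != ENETDOWN)
                perror("sendto");   /* a real application would handle this */
}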
From patchwork Tue Jun 18 07:56:40 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13701939
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux.dev, bpf@vger.kernel.org
Subject: [PATCH net-next v6 07/10] virtio_net: xsk: rx: support fill with xsk buffer
Date: Tue, 18 Jun 2024 15:56:40 +0800
Message-Id: <20240618075643.24867-8-xuanzhuo@linux.alibaba.com>

Implement the logic of filling the rq with XSK buffers.

Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 68 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 66 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 2bbc715f22c6..2ac5668a94ce 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -355,6 +355,8 @@ struct receive_queue {
 
                 /* xdp rxq used by xsk */
                 struct xdp_rxq_info xdp_rxq;
+
+                struct xdp_buff **xsk_buffs;
         } xsk;
 };
 
@@ -1032,6 +1034,53 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
         }
 }
 
+static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
+{
+        sg->dma_address = addr;
+        sg->length = len;
+}
+
+static int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct receive_queue *rq,
+                                   struct xsk_buff_pool *pool, gfp_t gfp)
+{
+        struct xdp_buff **xsk_buffs;
+        dma_addr_t addr;
+        u32 len, i;
+        int err = 0;
+        int num;
+
+        xsk_buffs = rq->xsk.xsk_buffs;
+
+        num = xsk_buff_alloc_batch(pool, xsk_buffs, rq->vq->num_free);
+        if (!num)
+                return -ENOMEM;
+
+        len = xsk_pool_get_rx_frame_size(pool) + vi->hdr_len;
+
+        for (i = 0; i < num; ++i) {
+                /* use the part of XDP_PACKET_HEADROOM as the virtnet hdr space */
+                addr = xsk_buff_xdp_get_dma(xsk_buffs[i]) - vi->hdr_len;
+
+                sg_init_table(rq->sg, 1);
+                sg_fill_dma(rq->sg, addr, len);
+
+                err = virtqueue_add_inbuf(rq->vq, rq->sg, 1, xsk_buffs[i], gfp);
+                if (err)
+                        goto err;
+        }
+
+        return num;
+
+err:
+        if (i)
+                err = i;
+
+        for (; i < num; ++i)
+                xsk_buff_free(xsk_buffs[i]);
+
+        return err;
+}
+
 static int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag)
 {
         struct virtnet_info *vi = netdev_priv(dev);
@@ -2206,6 +2255,11 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
         int err;
         bool oom;
 
+        if (rq->xsk.pool) {
+                err = virtnet_add_recvbuf_xsk(vi, rq, rq->xsk.pool, gfp);
+                goto kick;
+        }
+
         do {
                 if (vi->mergeable_rx_bufs)
                         err = add_recvbuf_mergeable(vi, rq, gfp);
@@ -2214,10 +2268,11 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
                 else
                         err = add_recvbuf_small(vi, rq, gfp);
 
-                oom = err == -ENOMEM;
                 if (err)
                         break;
         } while (rq->vq->num_free);
+
+kick:
         if (virtqueue_kick_prepare(rq->vq) && virtqueue_notify(rq->vq)) {
                 unsigned long flags;
 
@@ -2226,6 +2281,7 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
                 u64_stats_update_end_irqrestore(&rq->stats.syncp, flags);
         }
 
+        oom = err == -ENOMEM;
         return !oom;
 }
 
@@ -5050,7 +5106,7 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
         struct receive_queue *rq;
         struct device *dma_dev;
         struct send_queue *sq;
-        int err;
+        int err, size;
 
         /* In big_packets mode, xdp cannot work, so there is no need to
          * initialize xsk of rq.
@@ -5078,6 +5134,12 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
         if (!dma_dev)
                 return -EPERM;
 
+        size = virtqueue_get_vring_size(rq->vq);
+
+        rq->xsk.xsk_buffs = kvcalloc(size, sizeof(*rq->xsk.xsk_buffs), GFP_KERNEL);
+        if (!rq->xsk.xsk_buffs)
+                return -ENOMEM;
+
         err = xsk_pool_dma_map(pool, dma_dev, 0);
         if (err)
                 goto err_xsk_map;
@@ -5112,6 +5174,8 @@ static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
 
         xsk_pool_dma_unmap(pool, 0);
 
+        kvfree(rq->xsk.xsk_buffs);
+
         return err;
 }
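virtnet_add_recvbuf_xsk() can only post buffers that user space has first placed on the fill ring. The snippet below is not from the patch; it is an illustrative user-space sketch of that producer side, assuming a fill ring created as in the earlier sketch and frame addresses laid out as i * frame_size within the UMEM.

/* Illustrative sketch: hand num_frames frames to the kernel via the fill
 * ring; the driver's xsk_buff_alloc_batch() call above draws from these. */
#include <xdp/xsk.h>

static void populate_fill_ring(struct xsk_ring_prod *fill_ring,
                               unsigned int num_frames, unsigned int frame_size)
{
        unsigned int i;
        __u32 idx;

        if (xsk_ring_prod__reserve(fill_ring, num_frames, &idx) != num_frames)
                return;         /* ring full, try again later */

        for (i = 0; i < num_frames; i++)
                *xsk_ring_prod__fill_addr(fill_ring, idx++) = (__u64)i * frame_size;

        xsk_ring_prod__submit(fill_ring, num_frames);
}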
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux.dev, bpf@vger.kernel.org Subject: [PATCH net-next v6 08/10] virtio_net: xsk: rx: support recv small mode Date: Tue, 18 Jun 2024 15:56:41 +0800 Message-Id: <20240618075643.24867-9-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20240618075643.24867-1-xuanzhuo@linux.alibaba.com> References: <20240618075643.24867-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 8baa0af3684b X-Patchwork-Delegate: kuba@kernel.org In the process: 1. We may need to copy data to create skb for XDP_PASS. 2. We may need to call xsk_buff_free() to release the buffer. 3. The handle for xdp_buff is difference from the buffer. If we pushed this logic into existing receive handle(merge and small), we would have to maintain code scattered inside merge and small (and big). So I think it is a good choice for us to put the xsk code into an independent function. Signed-off-by: Xuan Zhuo --- drivers/net/virtio_net.c | 135 ++++++++++++++++++++++++++++++++++++++- 1 file changed, 133 insertions(+), 2 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 2ac5668a94ce..06608d696e2e 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -500,6 +500,10 @@ struct virtio_net_common_hdr { }; static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf); +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp, + struct net_device *dev, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats); static bool is_xdp_frame(void *ptr) { @@ -1040,6 +1044,120 @@ static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len) sg->length = len; } +static struct xdp_buff *buf_to_xdp(struct virtnet_info *vi, + struct receive_queue *rq, void *buf, u32 len) +{ + struct xdp_buff *xdp; + u32 bufsize; + + xdp = (struct xdp_buff *)buf; + + bufsize = xsk_pool_get_rx_frame_size(rq->xsk.pool) + vi->hdr_len; + + if (unlikely(len > bufsize)) { + pr_debug("%s: rx error: len %u exceeds truesize %u\n", + vi->dev->name, len, bufsize); + DEV_STATS_INC(vi->dev, rx_length_errors); + xsk_buff_free(xdp); + return NULL; + } + + xsk_buff_set_size(xdp, len); + xsk_buff_dma_sync_for_cpu(xdp); + + return xdp; +} + +static struct sk_buff *xdp_construct_skb(struct receive_queue *rq, + struct xdp_buff *xdp) +{ + unsigned int metasize = xdp->data - xdp->data_meta; + struct sk_buff *skb; + unsigned int size; + + size = xdp->data_end - xdp->data_hard_start; + skb = napi_alloc_skb(&rq->napi, size); + if (unlikely(!skb)) { + xsk_buff_free(xdp); + return NULL; + } + + skb_reserve(skb, xdp->data_meta - xdp->data_hard_start); + + size = xdp->data_end - xdp->data_meta; + memcpy(__skb_put(skb, size), xdp->data_meta, size); + + if (metasize) { + __skb_pull(skb, metasize); + skb_metadata_set(skb, metasize); + } + + xsk_buff_free(xdp); + + return skb; +} + +static struct sk_buff *virtnet_receive_xsk_small(struct net_device *dev, struct virtnet_info *vi, + struct receive_queue *rq, struct xdp_buff *xdp, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct bpf_prog *prog; + u32 ret; + + ret = XDP_PASS; + rcu_read_lock(); + prog = rcu_dereference(rq->xdp_prog); + if (prog) + ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, stats); + rcu_read_unlock(); + + switch (ret) { + 
case XDP_PASS: + return xdp_construct_skb(rq, xdp); + + case XDP_TX: + case XDP_REDIRECT: + return NULL; + + default: + /* drop packet */ + xsk_buff_free(xdp); + u64_stats_inc(&stats->drops); + return NULL; + } +} + +static struct sk_buff *virtnet_receive_xsk_buf(struct virtnet_info *vi, struct receive_queue *rq, + void *buf, u32 len, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct net_device *dev = vi->dev; + struct sk_buff *skb = NULL; + struct xdp_buff *xdp; + + len -= vi->hdr_len; + + u64_stats_add(&stats->bytes, len); + + xdp = buf_to_xdp(vi, rq, buf, len); + if (!xdp) + return NULL; + + if (unlikely(len < ETH_HLEN)) { + pr_debug("%s: short packet %i\n", dev->name, len); + DEV_STATS_INC(dev, rx_length_errors); + xsk_buff_free(xdp); + return NULL; + } + + if (!vi->mergeable_rx_bufs) + skb = virtnet_receive_xsk_small(dev, vi, rq, xdp, xdp_xmit, stats); + + return skb; +} + static int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct receive_queue *rq, struct xsk_buff_pool *pool, gfp_t gfp) { @@ -2363,9 +2481,22 @@ static int virtnet_receive(struct receive_queue *rq, int budget, void *buf; int i; - if (!vi->big_packets || vi->mergeable_rx_bufs) { - void *ctx; + if (rq->xsk.pool) { + struct sk_buff *skb; + + while (packets < budget) { + buf = virtqueue_get_buf(rq->vq, &len); + if (!buf) + break; + skb = virtnet_receive_xsk_buf(vi, rq, buf, len, xdp_xmit, &stats); + if (skb) + virtnet_receive_done(vi, rq, skb); + + packets++; + } + } else if (!vi->big_packets || vi->mergeable_rx_bufs) { + void *ctx; while (packets < budget && (buf = virtnet_rq_get_buf(rq, &len, &ctx))) { receive_buf(vi, rq, buf, len, ctx, xdp_xmit, &stats); From patchwork Tue Jun 18 07:56:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13701935 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-133.freemail.mail.aliyun.com (out30-133.freemail.mail.aliyun.com [115.124.30.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C367919DFBA; Tue, 18 Jun 2024 07:56:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=115.124.30.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718697417; cv=none; b=mFaPNE0yvEDM4hd8duLimAuQ7AXSUMMO5DAfA3lJCQwizDFBOZofrmSI4DPPjAQcE1RCPiidQt5RUzAR7rlTRQtlneIDmJdQvc0BIYCH8ieoPvCJulWmUEy0rMCw5TEIJGKOsDJfM/YYUGVz6VYB5RMV47R3Cv9GCeg+9TX5OM0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718697417; c=relaxed/simple; bh=FWHKSY/+gape+qlMF1te0S6rQk+1EewFBWF0NTMojJg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Bx5OR7A5Z2L4kQ0B+HJbNQEg8yfC8rBOxxegXJJh+KYyX3+Ln6+XlTNXGyV4r7KKmeQDxI8Mg3pVUpEE/n05puFa+kiBgGUemuxHRPYg/zP97VXfCqlCeCR3gyR2cCj3BxWXbuTH9wVx+E7gBhxf/gWS6BkiLvFtt7n1Oy9/Ck0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com; spf=pass smtp.mailfrom=linux.alibaba.com; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b=joYPijmT; arc=none smtp.client-ip=115.124.30.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass 
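Frames that the XDP program redirects to the socket (the XDP_REDIRECT case above) are delivered to the user-space rx ring, while XDP_PASS frames are copied into an skb by xdp_construct_skb(). The sketch below is not part of the patch; it is an illustrative user-space consumer loop using the libxdp ring helpers, assuming an aligned UMEM whose base mapping is umem_area.

/* Illustrative sketch: drain up to 'budget' descriptors from the AF_XDP
 * rx ring, i.e. the user-space side of the XDP_REDIRECT path above. */
#include <xdp/xsk.h>

static unsigned int drain_rx_ring(struct xsk_ring_cons *rx_ring, void *umem_area,
                                  unsigned int budget)
{
        unsigned int i, rcvd;
        __u32 idx = 0;

        rcvd = xsk_ring_cons__peek(rx_ring, budget, &idx);
        if (!rcvd)
                return 0;

        for (i = 0; i < rcvd; i++) {
                const struct xdp_desc *desc = xsk_ring_cons__rx_desc(rx_ring, idx + i);
                void *pkt = xsk_umem__get_data(umem_area, desc->addr);

                /* process desc->len bytes starting at pkt ... */
                (void)pkt;
        }

        xsk_ring_cons__release(rx_ring, rcvd);
        return rcvd;
}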
From patchwork Tue Jun 18 07:56:42 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13701935
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux.dev, bpf@vger.kernel.org
Subject: [PATCH net-next v6 09/10] virtio_net: xsk: rx: support recv merge mode
Date: Tue, 18 Jun 2024 15:56:42 +0800
Message-Id: <20240618075643.24867-10-xuanzhuo@linux.alibaba.com>

Support AF-XDP for mergeable buffer mode.

Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 139 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 139 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 06608d696e2e..cfa106aa8039 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -504,6 +504,10 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
                                struct net_device *dev,
                                unsigned int *xdp_xmit,
                                struct virtnet_rq_stats *stats);
+static struct sk_buff *virtnet_skb_append_frag(struct sk_buff *head_skb,
+                                               struct sk_buff *curr_skb,
+                                               struct page *page, void *buf,
+                                               int len, int truesize);
 
 static bool is_xdp_frame(void *ptr)
 {
@@ -1128,6 +1132,139 @@ static struct sk_buff *virtnet_receive_xsk_small(struct net_device *dev, struct
         }
 }
 
+static void xsk_drop_follow_bufs(struct net_device *dev,
+                                 struct receive_queue *rq,
+                                 u32 num_buf,
+                                 struct virtnet_rq_stats *stats)
+{
+        struct xdp_buff *xdp;
+        u32 len;
+
+        while (num_buf-- > 1) {
+                xdp = virtqueue_get_buf(rq->vq, &len);
+                if (unlikely(!xdp)) {
+                        pr_debug("%s: rx error: %d buffers missing\n",
+                                 dev->name, num_buf);
+                        DEV_STATS_INC(dev, rx_length_errors);
+                        break;
+                }
+                u64_stats_add(&stats->bytes, len);
+                xsk_buff_free(xdp);
+        }
+}
+
+static int xsk_append_merge_buffer(struct virtnet_info *vi,
+                                   struct receive_queue *rq,
+                                   struct sk_buff *head_skb,
+                                   u32 num_buf,
+                                   struct virtio_net_hdr_mrg_rxbuf *hdr,
+                                   struct virtnet_rq_stats *stats)
+{
+        struct sk_buff *curr_skb;
+        struct xdp_buff *xdp;
+        u32 len, truesize;
+        struct page *page;
+        void *buf;
+
+        curr_skb = head_skb;
+
+        while (--num_buf) {
+                buf = virtqueue_get_buf(rq->vq, &len);
+                if (unlikely(!buf)) {
+                        pr_debug("%s: rx error: %d buffers out of %d missing\n",
+                                 vi->dev->name, num_buf,
+                                 virtio16_to_cpu(vi->vdev,
+                                                 hdr->num_buffers));
+                        DEV_STATS_INC(vi->dev, rx_length_errors);
+                        return -EINVAL;
+                }
+
+                u64_stats_add(&stats->bytes, len);
+
+                xdp = buf_to_xdp(vi, rq, buf, len);
+                if (!xdp)
+                        goto err;
+
+                buf = napi_alloc_frag(len);
+                if (!buf) {
+                        xsk_buff_free(xdp);
+                        goto err;
+                }
+
+                memcpy(buf, xdp->data - vi->hdr_len, len);
+
+                xsk_buff_free(xdp);
+
+                page = virt_to_page(buf);
+
+                truesize = len;
+
+                curr_skb = virtnet_skb_append_frag(head_skb, curr_skb, page,
+                                                   buf, len, truesize);
+                if (!curr_skb) {
+                        put_page(page);
+                        goto err;
+                }
+        }
+
+        return 0;
+
+err:
+        xsk_drop_follow_bufs(vi->dev, rq, num_buf, stats);
+        return -EINVAL;
+}
+
+static struct sk_buff *virtnet_receive_xsk_merge(struct net_device *dev, struct virtnet_info *vi,
+                                                 struct receive_queue *rq, struct xdp_buff *xdp,
+                                                 unsigned int *xdp_xmit,
+                                                 struct virtnet_rq_stats *stats)
+{
+        struct virtio_net_hdr_mrg_rxbuf *hdr;
+        struct bpf_prog *prog;
+        struct sk_buff *skb;
+        u32 ret, num_buf;
+
+        hdr = xdp->data - vi->hdr_len;
+        num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+
+        ret = XDP_PASS;
+        rcu_read_lock();
+        prog = rcu_dereference(rq->xdp_prog);
+        /* TODO: support multi buffer. */
+        if (prog && num_buf == 1)
+                ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, stats);
+        rcu_read_unlock();
+
+        switch (ret) {
+        case XDP_PASS:
+                skb = xdp_construct_skb(rq, xdp);
+                if (!skb)
+                        goto drop_bufs;
+
+                if (xsk_append_merge_buffer(vi, rq, skb, num_buf, hdr, stats)) {
+                        dev_kfree_skb(skb);
+                        goto drop;
+                }
+
+                return skb;
+
+        case XDP_TX:
+        case XDP_REDIRECT:
+                return NULL;
+
+        default:
+                /* drop packet */
+                xsk_buff_free(xdp);
+        }
+
+drop_bufs:
+        xsk_drop_follow_bufs(dev, rq, num_buf, stats);
+
+drop:
+        u64_stats_inc(&stats->drops);
+        return NULL;
+}
+
 static struct sk_buff *virtnet_receive_xsk_buf(struct virtnet_info *vi, struct receive_queue *rq,
                                                void *buf, u32 len,
                                                unsigned int *xdp_xmit,
@@ -1154,6 +1291,8 @@ static struct sk_buff *virtnet_receive_xsk_buf(struct virtnet_info *vi, struct r
 
         if (!vi->mergeable_rx_bufs)
                 skb = virtnet_receive_xsk_small(dev, vi, rq, xdp, xdp_xmit, stats);
+        else
+                skb = virtnet_receive_xsk_merge(dev, vi, rq, xdp, xdp_xmit, stats);
 
         return skb;
 }
From patchwork Tue Jun 18 07:56:43 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13701937
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux.dev, bpf@vger.kernel.org
Subject: [PATCH net-next v6 10/10] virtio_net: xsk: rx: free the unused xsk buffer
Date: Tue, 18 Jun 2024 15:56:43 +0800
Message-Id: <20240618075643.24867-11-xuanzhuo@linux.alibaba.com>

Release the xsk buffers when the queue is being released or resized.

Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index cfa106aa8039..33695b86bd99 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -967,6 +967,11 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
 
         rq = &vi->rq[i];
 
+        if (rq->xsk.pool) {
+                xsk_buff_free((struct xdp_buff *)buf);
+                return;
+        }
+
         if (!vi->big_packets || vi->mergeable_rx_bufs)
                 virtnet_rq_unmap(rq, buf, 0);