From patchwork Mon Dec 4 15:01:48 2023
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 13478656
X-Patchwork-Delegate: kuba@kernel.org
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, lorenzo.bianconi@redhat.com, linyunsheng@huawei.com,
    alexander.duyck@gmail.com, aleksander.lobakin@intel.com,
    liangchen.linux@gmail.com
Subject: [PATCH net] net: veth: fix packet segmentation in veth_convert_skb_to_xdp_buff
Date: Mon, 4 Dec 2023 16:01:48 +0100
X-Mailer: git-send-email 2.43.0
X-Mailing-List: netdev@vger.kernel.org

Depending on the previously allocated packet, page_offset can be
non-zero in the veth_convert_skb_to_xdp_buff() routine. Take the page
fragment offset into account when copying the skb paged area in
veth_convert_skb_to_xdp_buff().

Fixes: 2d0de67da51a ("net: veth: use newly added page pool API for veth with xdp")
Signed-off-by: Lorenzo Bianconi
Reviewed-by: Yunsheng Lin
---
 drivers/net/veth.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 57efb3454c57..977861c46b1f 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -790,7 +790,8 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 			skb_add_rx_frag(nskb, i, page, page_offset, size,
 					truesize);
-			if (skb_copy_bits(skb, off, page_address(page),
+			if (skb_copy_bits(skb, off,
+					  page_address(page) + page_offset,
 					  size)) {
 				consume_skb(nskb);
 				goto drop;