Message ID: 20201016073527.5087-1-ceggers@arri.de
State: Not Applicable
Delegated to: Netdev Maintainers
Series: [net] net: dsa: ksz: don't pad a cloned sk_buff
On Fri, Oct 16, 2020 at 09:35:27AM +0200, Christian Eggers wrote:
> If the supplied sk_buff is cloned (e.g. in dsa_skb_tx_timestamp()),
> __skb_put_padto() will allocate a new sk_buff with size = skb->len +
> padlen. So the condition just tested for (skb_tailroom(skb) >= padlen +
> len) is not fulfilled anymore. Although the real size will usually be
> larger than skb->len + padlen (due to alignment), there is no guarantee
> that the required memory for the tail tag will be available
>
> Instead of letting __skb_put_padto allocate a new (too small) sk_buff,
> lets take the already existing path and allocate a new sk_buff ourself
> (with sufficient size).

Hi Christian

What is not clear to me is why not change the __skb_put_padto() call
to pass the correct length?

	Andrew
On Fri, Oct 16, 2020 at 04:00:36PM +0200, Andrew Lunn wrote:
> On Fri, Oct 16, 2020 at 09:35:27AM +0200, Christian Eggers wrote:
> > If the supplied sk_buff is cloned (e.g. in dsa_skb_tx_timestamp()),
> > __skb_put_padto() will allocate a new sk_buff with size = skb->len +
> > padlen. So the condition just tested for (skb_tailroom(skb) >= padlen +
> > len) is not fulfilled anymore. Although the real size will usually be
> > larger than skb->len + padlen (due to alignment), there is no guarantee
> > that the required memory for the tail tag will be available
> >
> > Instead of letting __skb_put_padto allocate a new (too small) sk_buff,
> > lets take the already existing path and allocate a new sk_buff ourself
> > (with sufficient size).
>
> Hi Christian
>
> What is not clear to me is why not change the __skb_put_padto() call
> to pass the correct length?

There is a second call to skb_put that increases the skb->len further
from the tailroom area. See Christian's other patch. I would treat this
patch as "premature" until we fully understand what's going on there.
diff --git a/net/dsa/tag_ksz.c b/net/dsa/tag_ksz.c
index 945a9bd5ba35..cb1f27e15201 100644
--- a/net/dsa/tag_ksz.c
+++ b/net/dsa/tag_ksz.c
@@ -22,7 +22,7 @@ static struct sk_buff *ksz_common_xmit(struct sk_buff *skb,

 	padlen = (skb->len >= ETH_ZLEN) ? 0 : ETH_ZLEN - skb->len;

-	if (skb_tailroom(skb) >= padlen + len) {
+	if (skb_tailroom(skb) >= padlen + len && !skb_cloned(skb)) {
 		/* Let dsa_slave_xmit() free skb */
 		if (__skb_put_padto(skb, skb->len + padlen, false))
 			return NULL;
@@ -45,7 +45,7 @@ static struct sk_buff *ksz_common_xmit(struct sk_buff *skb,
 	/* Let skb_put_padto() free nskb, and let dsa_slave_xmit() free
 	 * skb
 	 */
-	if (skb_put_padto(nskb, nskb->len + padlen))
+	if (skb_put_padto(nskb, ETH_ZLEN + len))
 		return NULL;

 	consume_skb(skb);
If the supplied sk_buff is cloned (e.g. in dsa_skb_tx_timestamp()),
__skb_put_padto() will allocate a new sk_buff with size = skb->len +
padlen. So the condition just tested for (skb_tailroom(skb) >= padlen +
len) is no longer fulfilled. Although the real size will usually be
larger than skb->len + padlen (due to alignment), there is no guarantee
that the required memory for the tail tag will be available.

Instead of letting __skb_put_padto() allocate a new (too small) sk_buff,
let's take the already existing path and allocate a new sk_buff ourselves
(with sufficient size).

Fixes: 8b8010fb7876 ("dsa: add support for Microchip KSZ tail tagging")
Signed-off-by: Christian Eggers <ceggers@arri.de>
---
I am not sure whether this is a problem for current kernels (it depends
on whether cloned sk_buffs can occur on any path). But when adding time
stamping (will be submitted soon), this will become an issue.

This patch supersedes "net: dsa: ksz: fix padding size of skb" from
yesterday.

 net/dsa/tag_ksz.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)