author    Alexander Lobakin <alobakin@pm.me>    2020-11-11 20:45:25 +0000
committer Jakub Kicinski <kuba@kernel.org>     2020-11-12 09:55:43 -0800
commit    4b1a86281cc1d0de46df3ad2cb8c1f86ac07681c (patch)
tree      224c27eb3a0ced79f988c9ef7644f4b8cdb9d60c /net
parent    8a5c2906c52f4a81939b4f8536e0004a4193a154 (diff)
net: udp: fix UDP header access on Fast/frag0 UDP GRO

UDP GRO uses udp_hdr(skb) in its .gro_receive() callback. While this is
probably fine for non-frag0 paths (when all the headers, or even the
entire frame, are already in the skb head), the inline points to junk
when Fast GRO is used (napi_gro_frags(), or napi_gro_receive() with only
the Ethernet header in the skb head and all the rest in the frags), and
breaks GRO packet coalescing and the packet flow itself.
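For reference, udp_hdr() is just a cast of the transport-header pointer,
so it is only meaningful when the UDP header actually sits in the linear
skb head (sketch paraphrasing the kernel's include/linux/udp.h of that
era):

```c
/* Returns a pointer at the transport-header offset. Under
 * napi_gro_frags(), only the Ethernet header has been pulled into
 * skb->head, so this offset points past the linear data and the
 * returned pointer references uninitialized/junk memory.
 */
static inline struct udphdr *udp_hdr(const struct sk_buff *skb)
{
	return (struct udphdr *)skb_transport_header(skb);
}
```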

To support both modes, skb_gro_header_fast() + skb_gro_header_slow() are
typically used. UDP even has an inline helper that makes use of them,
udp_gro_udphdr(). Use it instead of the troublemaking udp_hdr() to get
rid of the out-of-order deliveries.
Present since the introduction of plain UDP GRO in 5.0-rc1.
Fixes: e20cf8d3f1f7 ("udp: implement GRO for plain UDP sockets.")
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net')
 net/ipv4/udp_offload.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index e67a66fbf27b..13740e9fe6ec 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -366,7 +366,7 @@ out:
 static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
 					       struct sk_buff *skb)
 {
-	struct udphdr *uh = udp_hdr(skb);
+	struct udphdr *uh = udp_gro_udphdr(skb);
 	struct sk_buff *pp = NULL;
 	struct udphdr *uh2;
 	struct sk_buff *p;