author      Eric Dumazet <edumazet@google.com>       2017-11-02 18:10:03 -0700
committer   David S. Miller <davem@davemloft.net>    2017-11-03 16:02:56 +0900
commit      f67971e683e81d7ba4739728511ae6e52a1b6321 (patch)
tree        c960635bc9e1813044d72d838df112acc04fc9b2 /net/ipv4/tcp_output.c
parent      c509a8229d8df29c8308c4b03a1f6d69eb287acd (diff)
tcp: tcp_fragment() should not assume rtx skbs
While stress testing MTU probing, we had crashes in list_del() that we
root-caused to tcp_fragment() unconditionally inserting the freshly
allocated skb into the tsorted_sent_queue list.
But this list is supposed to contain only skbs that have actually been sent.
This was mostly harmless until MTU probing was enabled.
Fortunately, we can use the tcp_queue enum that was added later (but in the
same Linux version) for the rtx rb-tree to fix the bug.
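
To illustrate the invariant outside the kernel, here is a minimal user-space
C sketch (not kernel code): the fake_skb struct, the frag_link_tsorted()
helper and the enum names are made up for the example, and only the
conditional list_add() mirrors the actual one-line fix.

/*
 * Minimal user-space sketch of the invariant this patch restores:
 * the time-sorted list may only contain skbs that were actually sent.
 * Everything here (fake_skb, frag_link_tsorted(), the enum names) is
 * made up for illustration; only the conditional list_add() mirrors
 * the fix.
 */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *entry, struct list_head *head)
{
	entry->next = head->next;
	entry->prev = head;
	head->next->prev = entry;
	head->next = entry;
}

/* Stand-in for enum tcp_queue: which queue the skb being split sits on. */
enum frag_queue { FRAG_IN_WRITE_QUEUE, FRAG_IN_RTX_QUEUE };

struct fake_skb {
	struct list_head tsorted_anchor;   /* like skb->tcp_tsorted_anchor */
};

/*
 * The new fragment inherits its parent's place on the time-sorted list
 * only if the parent was already sent (rtx queue).  Doing the list_add()
 * unconditionally is exactly the bug being fixed.
 */
static void frag_link_tsorted(struct fake_skb *skb, struct fake_skb *buff,
			      enum frag_queue q)
{
	if (q == FRAG_IN_RTX_QUEUE)
		list_add(&buff->tsorted_anchor, &skb->tsorted_anchor);
	else
		INIT_LIST_HEAD(&buff->tsorted_anchor);  /* not sent yet */
}

int main(void)
{
	struct fake_skb sent, frag;

	INIT_LIST_HEAD(&sent.tsorted_anchor);
	frag_link_tsorted(&sent, &frag, FRAG_IN_WRITE_QUEUE);

	/* With the fix, an unsent write-queue fragment stays off the list. */
	printf("fragment on tsorted list: %s\n",
	       frag.tsorted_anchor.next == &sent.tsorted_anchor ? "yes" : "no");
	return 0;
}

Compiled and run, this prints "no" for a write-queue fragment, which is the
behaviour the conditional list_add() below restores in tcp_fragment().
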
Fixes: e2080072ed2d ("tcp: new list for sent but unacked skbs for RACK recovery")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Priyaranjan Jha <priyarjha@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/ipv4/tcp_output.c')
-rw-r--r--   net/ipv4/tcp_output.c   3
1 file changed, 2 insertions, 1 deletion
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 06a0c89ffe40..822962ece284 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1395,7 +1395,8 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
 	/* Link BUFF into the send queue. */
 	__skb_header_release(buff);
 	tcp_insert_write_queue_after(skb, buff, sk, tcp_queue);
-	list_add(&buff->tcp_tsorted_anchor, &skb->tcp_tsorted_anchor);
+	if (tcp_queue == TCP_FRAG_IN_RTX_QUEUE)
+		list_add(&buff->tcp_tsorted_anchor, &skb->tcp_tsorted_anchor);
 
 	return 0;
 }