author     Yuchung Cheng <ycheng@google.com>                2017-05-31 11:21:27 -0700
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2017-06-14 15:05:52 +0200
commit     3ee35b96825e063defe6eb333ffd3124aab5061b (patch)
tree       6d719bb1f3b958fe89c1aa5dbef9d66e8c48f7a7 /net
parent     61c92d5a533c884d93b129cd4ad412a9128db9b7 (diff)
tcp: disallow cwnd undo when switching congestion control
[ Upstream commit 44abafc4cc094214a99f860f778c48ecb23422fc ]

When the sender switches its congestion control during loss recovery, if the recovery is spurious then it may incorrectly revert cwnd and ssthresh to the older values set by a previous congestion control. Consider a congestion control (like BBR) that does not use ssthresh and keeps it infinite: the connection may incorrectly revert cwnd to an infinite value when switching from BBR to another congestion control.

This patch fixes it by disallowing such a cwnd undo operation upon switching congestion control. Note that undo_marker is not reset, so that the packets that were incorrectly marked lost would be corrected. We only avoid undoing the cwnd in tcp_undo_cwnd_reduction().

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
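For readers unfamiliar with the undo path the message refers to, the following is a minimal sketch of the mechanism being guarded: undoing a (possibly spurious) cwnd reduction is keyed on prior_ssthresh being non-zero. The names toy_tcp_sock and toy_undo_cwnd_reduction are hypothetical stand-ins, not the kernel's struct tcp_sock or tcp_undo_cwnd_reduction(); only the field names mirror the real ones.

#include <stdint.h>

/* Hypothetical stand-in for the few fields of struct tcp_sock that matter here. */
struct toy_tcp_sock {
	uint32_t snd_cwnd;        /* current congestion window (packets)        */
	uint32_t snd_ssthresh;    /* current slow-start threshold               */
	uint32_t prior_cwnd;      /* cwnd remembered when the reduction started */
	uint32_t prior_ssthresh;  /* ssthresh remembered when it started        */
};

/*
 * Sketch of a spurious-recovery undo: the saved values are restored, but
 * only when prior_ssthresh is non-zero.  Zeroing prior_ssthresh when a new
 * congestion control is installed therefore makes the undo a no-op for
 * state recorded under the previous algorithm.  (The kernel asks the
 * congestion-control module for the cwnd to restore via its undo_cwnd
 * hook; a plain restore is used here for brevity.)
 */
static void toy_undo_cwnd_reduction(struct toy_tcp_sock *tp)
{
	if (tp->prior_ssthresh) {
		tp->snd_cwnd = tp->prior_cwnd;
		if (tp->prior_ssthresh > tp->snd_ssthresh)
			tp->snd_ssthresh = tp->prior_ssthresh;
	}
}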
Diffstat (limited to 'net')
-rw-r--r--  net/ipv4/tcp_cong.c  |  1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
index baea5df43598..0cdbea9b9288 100644
--- a/net/ipv4/tcp_cong.c
+++ b/net/ipv4/tcp_cong.c
@@ -179,6 +179,7 @@ void tcp_init_congestion_control(struct sock *sk)
 {
 	const struct inet_connection_sock *icsk = inet_csk(sk);
 
+	tcp_sk(sk)->prior_ssthresh = 0;
 	if (icsk->icsk_ca_ops->init)
 		icsk->icsk_ca_ops->init(sk);
 	if (tcp_ca_needs_ecn(sk))
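To see how the one-line change closes the hole described in the commit message, here is a toy walk-through reusing the hypothetical toy_tcp_sock / toy_undo_cwnd_reduction sketch above. toy_switch_congestion_control() is likewise only an illustration of the reset that tcp_init_congestion_control() now performs before the new algorithm's init hook runs, not the kernel function itself.

/* Models the effect of the patch: forget undo state recorded under the
 * previous congestion control before the new one takes over. */
static void toy_switch_congestion_control(struct toy_tcp_sock *tp)
{
	tp->prior_ssthresh = 0;
	/* ... the new algorithm's init hook would run here ... */
}

int main(void)
{
	struct toy_tcp_sock tp = {
		.snd_cwnd       = 10,
		.snd_ssthresh   = 7,
		.prior_cwnd     = 0,
		.prior_ssthresh = 0x7fffffff, /* "infinite", as a BBR-like CC keeps it */
	};

	toy_switch_congestion_control(&tp); /* e.g. BBR -> another CC during recovery */
	toy_undo_cwnd_reduction(&tp);       /* recovery later deemed spurious         */

	/* Without the prior_ssthresh reset, snd_ssthresh would have been
	 * reverted to the huge value left behind by the previous algorithm. */
	return (tp.snd_ssthresh == 7) ? 0 : 1;
}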