author     Eric Dumazet <edumazet@google.com>                2019-07-19 11:52:33 -0700
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>   2019-07-28 08:27:16 +0200
commit     18254123b79522e185495ac14c7a33636e8371d8 (patch)
tree       71931e565a7874060bec7baf8e335489240eb47a /include
parent     83b7f9a73bed9af76e9248c335318d45dfedf9fb (diff)
tcp: be more careful in tcp_fragment()
[ Upstream commit b617158dc096709d8600c53b6052144d12b89fab ]

Some applications set tiny SO_SNDBUF values and expect TCP to just work.
Recent patches to address CVE-2019-11478 broke them in case of losses,
since retransmits might be prevented.

We should allow these flows to make progress.

This patch allows the first and last skb in the retransmit queue to be
split even if memory limits are hit.

It also adds some room to account for the fact that tcp_sendmsg() and
tcp_sendpage() might overshoot sk_wmem_queued by about one full TSO skb
(64KB size). Note this allowance was already present in stable backports
for kernels < 4.15.

Note for < 4.15 backports: tcp_rtx_queue_tail() will probably look like:

static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk)
{
	struct sk_buff *skb = tcp_send_head(sk);

	return skb ? tcp_write_queue_prev(sk, skb) : tcp_write_queue_tail(sk);
}

Fixes: f070ef2ac667 ("tcp: tcp_fragment() should apply sane memory limits")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrew Prout <aprout@ll.mit.edu>
Tested-by: Andrew Prout <aprout@ll.mit.edu>
Tested-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Tested-by: Michal Kubecek <mkubecek@suse.cz>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Christoph Paasch <cpaasch@apple.com>
Cc: Jonathan Looney <jtl@netflix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
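For context, a minimal sketch of how the memory-limit check in tcp_fragment()
(net/ipv4/tcp_output.c) can use this helper. That companion hunk is not part of
the include-only diff shown below, so the limit arithmetic and the local names
(limit, tcp_queue) should be read as illustrative assumptions matching the
allowance described above, not as a quotation of this patch:

	long limit;	/* local in tcp_fragment(); sketch only */

	/* tcp_sendmsg()/tcp_sendpage() may overshoot sk_wmem_queued by about
	 * one full TSO skb, so grant some allowance, and always let the first
	 * and last skb of the retransmit queue be split.
	 */
	limit = sk->sk_sndbuf + 2 * SKB_TRUESIZE(GSO_MAX_SIZE);
	if (unlikely((sk->sk_wmem_queued >> 1) > limit &&
		     tcp_queue != TCP_FRAG_IN_WRITE_QUEUE &&
		     skb != tcp_rtx_queue_head(sk) &&
		     skb != tcp_rtx_queue_tail(sk))) {
		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
		return -ENOMEM;
	}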
Diffstat (limited to 'include')
-rw-r--r--  include/net/tcp.h  5
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 582c0caa9811..dd8e472362e3 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1675,6 +1675,11 @@ static inline struct sk_buff *tcp_rtx_queue_head(const struct sock *sk)
 	return skb_rb_first(&sk->tcp_rtx_queue);
 }
 
+static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk)
+{
+	return skb_rb_last(&sk->tcp_rtx_queue);
+}
+
 static inline struct sk_buff *tcp_write_queue_head(const struct sock *sk)
 {
 	return skb_peek(&sk->sk_write_queue);
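For reference, skb_rb_last() used by the new helper is a thin wrapper over the
rb-tree primitives defined earlier in include/net/tcp.h on these kernels; the
definitions below are quoted as an assumption for orientation, not as part of
this patch:

#define rb_to_skb(rb)		rb_entry_safe(rb, struct sk_buff, rbnode)
#define skb_rb_first(root)	rb_to_skb(rb_first(root))
#define skb_rb_last(root)	rb_to_skb(rb_last(root))

So tcp_rtx_queue_tail() simply returns the right-most skb in the retransmit
rb-tree, mirroring tcp_rtx_queue_head() above.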