| author | Eric Dumazet <edumazet@google.com> | 2015-05-15 12:39:29 -0700 |
| --- | --- | --- |
| committer | David S. Miller <davem@davemloft.net> | 2015-05-17 22:45:49 -0400 |
| commit | 76dfa6082032b5c179864816fa508879421678eb | |
| tree | e710efd16263290f153b9f565449bdef7a48f464 /net | |
| parent | 8e4d980ac21596a9b91d8e720c77ad081975a0a8 | |
tcp: allow one skb to be received per socket under memory pressure
While testing tight tcp_mem settings, I found that TCP sessions could get stuck because we do not allow even one skb to be received on them.

By allowing one skb to be received, we introduce fairness and eventually force memory hogs to release their allocation.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
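To make the fairness argument concrete, here is a standalone illustration of the acceptance rule the patch introduces. This is not kernel code; the names (`fake_sock`, `rcv_queue_len`, `mem_budget`) are invented for the example. The idea: a socket with an empty receive queue may take one skb even when the memory budget is exhausted, so every flow keeps making progress and the memory hogs are the ones that eventually have to drain.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the per-socket state that matters for the rule. */
struct fake_sock {
	unsigned int rcv_queue_len; /* skbs already queued for the application */
	long mem_budget;            /* bytes this socket may still charge      */
};

/* Old behaviour: purely budget-driven, so a socket can be starved forever. */
static bool accept_old(const struct fake_sock *sk, long truesize)
{
	return truesize <= sk->mem_budget;
}

/* New behaviour: an empty receive queue always gets one skb (forced charge);
 * otherwise the normal memory accounting applies, as before.
 */
static bool accept_new(const struct fake_sock *sk, long truesize)
{
	if (sk->rcv_queue_len == 0)
		return true;
	return truesize <= sk->mem_budget;
}

int main(void)
{
	/* A socket under pressure: nothing queued, no budget left. */
	struct fake_sock starved = { .rcv_queue_len = 0, .mem_budget = 0 };

	printf("old rule accepts skb: %d\n", accept_old(&starved, 1500));
	printf("new rule accepts skb: %d\n", accept_new(&starved, 1500));
	return 0;
}
```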
Diffstat (limited to 'net')
-rw-r--r-- | net/ipv4/tcp_input.c | 10 |
1 file changed, 6 insertions(+), 4 deletions(-)
```diff
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 093779f7e893..40c435997e54 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4507,10 +4507,12 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
 
 	if (eaten <= 0) {
 queue_and_out:
-		if (eaten < 0 &&
-		    tcp_try_rmem_schedule(sk, skb, skb->truesize))
-			goto drop;
-
+		if (eaten < 0) {
+			if (skb_queue_len(&sk->sk_receive_queue) == 0)
+				sk_forced_mem_schedule(sk, skb->truesize);
+			else if (tcp_try_rmem_schedule(sk, skb, skb->truesize))
+				goto drop;
+		}
 		eaten = tcp_queue_rcv(sk, skb, 0, &fragstolen);
 	}
 	tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);
```
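The forced path calls sk_forced_mem_schedule(), which is not defined in this diff (it comes from an earlier patch in the same series). As a rough sketch, and treating the body below as an approximation rather than a verbatim copy, such a helper charges the skb to the socket unconditionally instead of going through the limit checks that tcp_try_rmem_schedule() performs, so it cannot fail under memory pressure:

```c
/* Sketch only -- approximates the helper used above; the real definition
 * lives outside this diff.  The point is that the charge always succeeds:
 * no tcp_mem / memory-pressure check can reject it.
 */
void sk_forced_mem_schedule(struct sock *sk, int size)
{
	int amt;

	if (size <= sk->sk_forward_alloc)
		return;                       /* enough forward-alloc credit already */
	amt = sk_mem_pages(size);             /* round the charge up to whole pages */
	sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
	/* the real helper also adds `amt` to the protocol-wide allocation counter */
}
```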