author     Jon Paul Maloy <jon.maloy@ericsson.com>          2016-05-02 11:58:45 -0400
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2017-04-30 05:49:27 +0200
commit     76ca3053f32c997472c325176c235a25170fc98b
tree       341c486992c8106aff07f94676448009e6046df1 /net/tipc
parent     3f31559043087b9cd45582c2eb12d7900cedc4ed
tipc: re-enable compensation for socket receive buffer double counting
commit 7c8bcfb1255fe9d929c227d67bdcd84430fd200b upstream.
In the refactoring commit d570d86497ee ("tipc: enqueue arrived buffers
in socket in separate function") we accidentally replaced the test

    if (sk->sk_backlog.len == 0)
            atomic_set(&tsk->dupl_rcvcnt, 0);

with

    if (sk->sk_backlog.len)
            atomic_set(&tsk->dupl_rcvcnt, 0);

This effectively disables the compensation we have for the double
receive buffer accounting that occurs temporarily when buffers are
moved from the backlog to the socket receive queue. Until now, this
has gone unnoticed because of the large receive buffer limits we
apply, but it becomes indispensable when we reduce this buffer limit
later in this series.

We now fix this by inverting the mentioned condition.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'net/tipc')

 net/tipc/socket.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index b26b7a127773..d119291db852 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1755,7 +1755,7 @@ static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,

 	/* Try backlog, compensating for double-counted bytes */
 	dcnt = &tipc_sk(sk)->dupl_rcvcnt;
-	if (sk->sk_backlog.len)
+	if (!sk->sk_backlog.len)
 		atomic_set(dcnt, 0);
 	lim = rcvbuf_limit(sk, skb) + atomic_read(dcnt);
 	if (likely(!sk_add_backlog(sk, skb, lim)))