author		Eric Dumazet <edumazet@google.com>	2012-04-22 23:34:26 +0000
committer	David S. Miller <davem@davemloft.net>	2012-04-23 22:28:28 -0400
commit		f545a38f74584cc7424cb74f792a00c6d2589485 (patch)
tree		b272cbfed3267a7750f55f23989e1b070ae6ac3e /include/net/sock.h
parent		b98985073bc5403ef1320866e4ef8bbc5d587ceb (diff)
net: add a limit parameter to sk_add_backlog()
sk_add_backlog() & sk_rcvqueues_full() hard-coded sk_rcvbuf as the
memory limit. We need to make this limit a parameter for TCP use.
No functional change is expected in this patch; all callers still use
the old sk_rcvbuf limit.
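Concretely, every converted call site just passes the limit it already
used, so behavior is unchanged. A minimal sketch of the caller-side
conversion, assuming a generic protocol receive path (the drop label is
illustrative, not taken from a specific call site):

	/* Before this patch: the limit was implicit; sk->sk_rcvbuf
	 * was read inside sk_add_backlog()/sk_rcvqueues_full().
	 */
	if (sk_add_backlog(sk, skb))
		goto drop;

	/* After this patch: callers pass the old limit explicitly,
	 * preserving the existing behavior.
	 */
	if (sk_add_backlog(sk, skb, sk->sk_rcvbuf))
		goto drop;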
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Maciej Żenczykowski <maze@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Cc: Rick Jones <rick.jones2@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/net/sock.h')
-rw-r--r--	include/net/sock.h	10
1 file changed, 6 insertions, 4 deletions
diff --git a/include/net/sock.h b/include/net/sock.h
index 4cdb9b3050f4..4e9d01e491d5 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -709,17 +709,19 @@ static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)
  * Do not take into account this skb truesize,
  * to allow even a single big packet to come.
  */
-static inline bool sk_rcvqueues_full(const struct sock *sk, const struct sk_buff *skb)
+static inline bool sk_rcvqueues_full(const struct sock *sk, const struct sk_buff *skb,
+				     unsigned int limit)
 {
 	unsigned int qsize = sk->sk_backlog.len + atomic_read(&sk->sk_rmem_alloc);
 
-	return qsize > sk->sk_rcvbuf;
+	return qsize > limit;
 }
 
 /* The per-socket spinlock must be held here. */
-static inline __must_check int sk_add_backlog(struct sock *sk, struct sk_buff *skb)
+static inline __must_check int sk_add_backlog(struct sock *sk, struct sk_buff *skb,
+					      unsigned int limit)
 {
-	if (sk_rcvqueues_full(sk, skb))
+	if (sk_rcvqueues_full(sk, skb, limit))
 		return -ENOBUFS;
 
 	__sk_add_backlog(sk, skb);
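The new parameter is what lets TCP later decouple its backlog limit
from sk_rcvbuf alone. A hedged sketch of that intended use, assuming a
TCP receive path and an enlarged limit (the exact expression is chosen
by a follow-up patch, not by this one; the discard label is
illustrative):

	/* Illustrative only: let the backlog absorb more than
	 * sk_rcvbuf by also accounting the send buffer. The precise
	 * limit TCP adopts is outside the scope of this patch.
	 */
	unsigned int limit = sk->sk_rcvbuf + sk->sk_sndbuf;

	if (sk_add_backlog(sk, skb, limit))
		goto discard;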