author    Eric Dumazet <edumazet@google.com>  2018-03-31 13:16:25 -0700
committer David S. Miller <davem@davemloft.net>  2018-04-01 14:08:21 -0400
commit    694aba690de062cf27b28a5e56e7a5a7185b0a1c (patch)
tree      1fbdc27ecff23dc1babcf50d1fd9884d84d800ee /net/ipv6
parent    c07255020551cadae8bf903f2c5e1fcbad731bac (diff)
download  linux-694aba690de062cf27b28a5e56e7a5a7185b0a1c.tar.gz
          linux-694aba690de062cf27b28a5e56e7a5a7185b0a1c.tar.bz2
          linux-694aba690de062cf27b28a5e56e7a5a7185b0a1c.zip
ipv4: factorize sk_wmem_alloc updates done by __ip_append_data()
While testing my inet defrag changes, I found that the senders could spend ~20% of cpu cycles in skb_set_owner_w() updating sk->sk_wmem_alloc for every fragment they cook.

The solution to this problem is to use alloc_skb() instead of sock_wmalloc() and manually perform a single sk_wmem_alloc change.

A similar change for IPv6 is provided in the following patch.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
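The accounting pattern the commit message describes can be sketched roughly as below, assuming a hypothetical helper build_fragments_batched() and an illustrative wmem_alloc_delta counter; the real change lives inside __ip_append_data() and is more involved than this fragment. The idea is that alloc_skb() leaves sk->sk_wmem_alloc alone, so the per-fragment atomic update done by sock_wmalloc()/skb_set_owner_w() can be replaced by one refcount_add() after all fragments are built.

    #include <linux/errno.h>
    #include <linux/skbuff.h>
    #include <net/sock.h>

    /* Hypothetical helper illustrating the batched-accounting pattern;
     * the function name, loop structure, and wmem_alloc_delta are
     * illustrative, not copied from the patch itself.
     */
    static int build_fragments_batched(struct sock *sk,
                                       struct sk_buff_head *queue,
                                       int nfrags, unsigned int alloclen,
                                       unsigned int hh_len)
    {
            int wmem_alloc_delta = 0;
            struct sk_buff *skb;
            int err = 0;
            int i;

            for (i = 0; i < nfrags; i++) {
                    /* alloc_skb() does not charge sk->sk_wmem_alloc,
                     * unlike sock_wmalloc(), which updates it once per
                     * fragment.
                     */
                    skb = alloc_skb(alloclen + hh_len + 15,
                                    sk->sk_allocation);
                    if (!skb) {
                            err = -ENOBUFS;
                            break;
                    }

                    /* hand the skb to the socket, but defer the
                     * write-memory accounting
                     */
                    skb->destructor = sock_wfree;
                    skb->sk = sk;
                    wmem_alloc_delta += skb->truesize;

                    __skb_queue_tail(queue, skb);
            }

            /* a single sk_wmem_alloc update for the whole batch instead
             * of one atomic operation per fragment
             */
            if (wmem_alloc_delta)
                    refcount_add(wmem_alloc_delta, &sk->sk_wmem_alloc);
            return err;
    }

Since each queued skb keeps sock_wfree() as its destructor, sk_wmem_alloc is still decremented by skb->truesize when a fragment is freed, so the deferred single addition stays balanced against the usual per-skb release path.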
Diffstat (limited to 'net/ipv6')
0 files changed, 0 insertions, 0 deletions