author    Eric Dumazet <edumazet@google.com>  2019-10-18 15:20:05 -0700
committer David S. Miller <davem@davemloft.net>  2019-10-19 12:21:53 -0700
commit    2a06b8982f8f2f40d03a3daf634676386bd84dbc (patch)
tree      76d330882a9159b334f734201e405121383b1be4 /Documentation
parent    50c7d2ba9de20f60a2d527ad6928209ef67e4cdd (diff)
net: reorder 'struct net' fields to avoid false sharing
Intel test robot reported a ~7% regression on TCP_CRR tests that they bisected to the cited commit.

Indeed, every time a TCP socket is created or deleted, the atomic counter net->count is touched (via get_net(net) and put_net(net) calls), so cpus might have to reload a contended cache line in net_hash_mix(net) calls.

We need to reorder 'struct net' fields to move @hash_mix into a read-mostly cache line.

We move into the first cache line the fields that can be dirtied often.

We probably will have to address in a followup patch the __randomize_layout that was added in linux-4.13, since this might break our placement choices.

Fixes: 355b98553789 ("netns: provide pure entropy for net_hash_mix()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'Documentation')
0 files changed, 0 insertions, 0 deletions