author     Jussi Maki <joamaki@gmail.com>            2021-06-15 08:54:15 +0000
committer  David S. Miller <davem@davemloft.net>     2021-06-15 11:26:15 -0700
commit     848ca9182a7d25bb54955c3aab9a3a2742bf9678 (patch)
tree       15449c43a407368fbc404c5d79c5e4189840226e /include/net/bonding.h
parent     b8f6b0522c298ae9267bd6584e19b942a0636910 (diff)
net: bonding: Use per-cpu rr_tx_counter
The round-robin rr_tx_counter was shared across CPUs, leading to
significant cache thrashing at high packet rates. This patch switches
the round-robin packet counter to a per-cpu variable for deciding
the destination slave.
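As an illustration of the approach, here is a minimal sketch of how such a
per-cpu counter can be read in the transmit path, assuming the struct field
is the "u32 __percpu *rr_tx_counter" shown in the header hunk below; the
helper name bond_rr_next_slave_id and the simple modulo reduction are
illustrative, not the exact hunks of this patch.

/* Hedged sketch: reading a per-cpu round-robin counter in the TX path.
 * this_cpu_inc_return() bumps only the local CPU's copy, so concurrent
 * transmitters no longer bounce one shared cache line between cores.
 */
#include <linux/percpu.h>
#include <net/bonding.h>

/* Illustrative helper, not necessarily the function this patch edits. */
static u32 bond_rr_next_slave_id(struct bonding *bond)
{
	/* Increment this CPU's private counter and read the new value;
	 * no atomic RMW on shared memory is needed.
	 */
	u32 idx = this_cpu_inc_return(*bond->rr_tx_counter);

	/* Reduce the counter to a slave index; the real driver then walks
	 * the slave list under RCU to find the slave with that index.
	 */
	return idx % bond->slave_cnt;
}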
In a test with 2x100Gbit ICE NICs and pktgen_sample_04_many_flows.sh
(-s 64 -t 32), the TX rate was 19.6 Mpps before and 22.3 Mpps after
this patch.
"perf top -e cache_misses" before:
12.31% [bonding] [k] bond_xmit_roundrobin_slave_get
10.59% [sch_fq_codel] [k] fq_codel_dequeue
9.34% [kernel] [k] skb_release_data
after:
15.42% [sch_fq_codel] [k] fq_codel_dequeue
10.06% [kernel] [k] __memset
9.12% [kernel] [k] skb_release_data
Signed-off-by: Jussi Maki <joamaki@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/net/bonding.h')
-rw-r--r--  include/net/bonding.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 019e998d944a..15335732e166 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -232,7 +232,7 @@ struct bonding {
 	char proc_file_name[IFNAMSIZ];
 #endif /* CONFIG_PROC_FS */
 	struct list_head bond_list;
-	u32 rr_tx_counter;
+	u32 __percpu *rr_tx_counter;
 	struct ad_bond_info ad_info;
 	struct alb_bond_info alb_info;
 	struct bond_params params;
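The header hunk above only changes the field's declaration; the per-cpu
storage still has to be allocated when the bond device is set up and released
on teardown (those bond_main.c hunks are not shown in this diff). Below is a
hedged sketch of that lifecycle using the standard alloc_percpu()/free_percpu()
helpers, with illustrative function names rather than the driver's actual
init/uninit hooks.

/* Hedged sketch of the counter's lifecycle; the real patch wires this into
 * the bonding setup/teardown paths, which are outside the header hunk above.
 */
#include <linux/errno.h>
#include <linux/percpu.h>
#include <net/bonding.h>

static int bond_rr_counter_init(struct bonding *bond)
{
	/* One u32 per possible CPU, zero-initialized by the allocator. */
	bond->rr_tx_counter = alloc_percpu(u32);
	if (!bond->rr_tx_counter)
		return -ENOMEM;
	return 0;
}

static void bond_rr_counter_destroy(struct bonding *bond)
{
	free_percpu(bond->rr_tx_counter);
	bond->rr_tx_counter = NULL;
}

Since alloc_percpu() returns zero-filled storage, every CPU's counter starts
at 0, matching the initial state of the old single shared counter.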