author    nikolay@redhat.com <nikolay@redhat.com>    2013-08-01 16:54:51 +0200
committer David S. Miller <davem@davemloft.net>      2013-08-01 16:42:02 -0700
commit 278b20837511776dc9d5f6ee1c7fabd5479838bb (patch)
tree   6be201bb67d28154444b1860d6c1247d02adab23 /drivers/net/bonding/bond_alb.c
parent 15077228cab68e5e8c3cbf26a7f6ebacfac4c829 (diff)
bonding: initial RCU conversion
This patch does the initial bonding conversion to RCU. After it, the following modes are protected by RCU alone: roundrobin, active-backup, broadcast and xor. The ALB/TLB and 3ad modes still acquire bond->lock for reading and will be dealt with later.

curr_active_slave needs to be dereferenced via RCU in the converted modes because the only thing protecting the slave after this patch is rcu_read_lock, so we need the proper barrier for weakly ordered archs and to make sure we don't use a stale pointer. It is not tagged with __rcu yet because there is still work to be done to remove curr_slave_lock, so sparse will complain when rcu_assign_pointer and rcu_dereference are used; the alternative of using rcu_dereference_protected would have created much bigger code churn, which is more difficult to test and review. That will be converted in time.

1. Active-backup mode
1.1 Perf recording while doing iperf -P 4
    - old bonding: iperf spent 0.55% in bonding, system spent 0.29% CPU in bonding
    - new bonding: iperf spent 0.29% in bonding, system spent 0.15% CPU in bonding
1.2 Bandwidth measurements
    - old bonding: 16.1 Gbps consistently
    - new bonding: 17.5 Gbps consistently

2. Round-robin mode
2.1 Perf recording while doing iperf -P 4
    - old bonding: iperf spent 0.51% in bonding, system spent 0.24% CPU in bonding
    - new bonding: iperf spent 0.16% in bonding, system spent 0.11% CPU in bonding
2.2 Bandwidth measurements
    - old bonding: 8 Gbps (variable due to packet reordering)
    - new bonding: 10 Gbps (variable due to packet reordering)

Latency has also improved in all converted modes, in particular while doing enslave/release (since that no longer affects tx). All modes were additionally stress tested by doing enslave/release in a loop while transmitting traffic.

Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
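For reference, a minimal sketch of the reader-side pattern described above. It is not part of this patch; example_xmit and the exact checks are illustrative only, assuming the usual struct bonding / struct slave definitions from bonding.h.

/*
 * Illustrative only (not from this patch): in a converted tx path the
 * only protection is rcu_read_lock(), so curr_active_slave has to be
 * fetched with rcu_dereference() to get the pairing barrier on weakly
 * ordered archs and to avoid using a stale pointer.
 */
#include <linux/netdevice.h>
#include <linux/rcupdate.h>
#include <linux/skbuff.h>
#include "bonding.h"		/* struct bonding, struct slave (assumed) */

static netdev_tx_t example_xmit(struct sk_buff *skb, struct bonding *bond)
{
	struct slave *slave;

	rcu_read_lock();
	slave = rcu_dereference(bond->curr_active_slave);
	if (slave && netif_running(slave->dev)) {
		skb->dev = slave->dev;
		dev_queue_xmit(skb);
	} else {
		/* no suitable interface, frame not sent */
		kfree_skb(skb);
	}
	rcu_read_unlock();

	return NETDEV_TX_OK;
}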
Diffstat (limited to 'drivers/net/bonding/bond_alb.c')
-rw-r--r--  drivers/net/bonding/bond_alb.c | 6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
index 4d35196b4b5a..3a5db7b1df68 100644
--- a/drivers/net/bonding/bond_alb.c
+++ b/drivers/net/bonding/bond_alb.c
@@ -1337,6 +1337,7 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
/* make sure that the curr_active_slave do not change during tx
*/
+ read_lock(&bond->lock);
read_lock(&bond->curr_slave_lock);
switch (ntohs(skb->protocol)) {
@@ -1441,11 +1442,12 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
}
read_unlock(&bond->curr_slave_lock);
-
+ read_unlock(&bond->lock);
if (res) {
/* no suitable interface, frame not sent */
kfree_skb(skb);
}
+
return NETDEV_TX_OK;
}
@@ -1663,7 +1665,7 @@ void bond_alb_handle_active_change(struct bonding *bond, struct slave *new_slave
}
swap_slave = bond->curr_active_slave;
- bond->curr_active_slave = new_slave;
+ rcu_assign_pointer(bond->curr_active_slave, new_slave);
if (!new_slave || list_empty(&bond->slave_list))
return;
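A side note on the __rcu tagging deferred by the commit message: once curr_active_slave is annotated, accesses made under the write-side lock would go through rcu_dereference_protected(). A hypothetical helper (not in this patch, names assumed) could look roughly like:

#include <linux/lockdep.h>
#include <linux/rcupdate.h>
#include "bonding.h"	/* struct bonding with rwlock_t curr_slave_lock (assumed) */

/* Hypothetical helper, not in the tree: valid only with curr_slave_lock held. */
static struct slave *bond_curr_slave_locked(struct bonding *bond)
{
	return rcu_dereference_protected(bond->curr_active_slave,
					 lockdep_is_held(&bond->curr_slave_lock));
}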