author		Ying Xue <ying.xue@windriver.com>	2014-04-21 10:55:48 +0800
committer	David S. Miller <davem@davemloft.net>	2014-04-22 21:17:53 -0400
commit		7216cd949c9bd56a4ccd952c624ab68f8c9aa0a4 (patch)
tree		9c4f86a2a04713294bbea8bbf9b19462f307f05b /net/tipc/name_distr.c
parent		2231c5af451e4b7ae3cc56eaa4653af6ede51109 (diff)
tipc: purge tipc_net_lock lock
The TIPC routing hierarchy currently comprises the structures 'node', 'link'
and 'bearer'. The whole hierarchy is protected by a single big read/write
lock, tipc_net_lock, to ensure that nothing is added or removed while code
is accessing any of these structures. This locking policy binds the node,
link and bearer components so closely together that their relationship
becomes unnecessarily complex. In the worst case, such a policy not only
hurts performance, but is also prone to occasional deadlock.
In order to decouple the complex relationship between bearer, node and
link, the locking policy is adjusted as follows:
- Bearer level
  The RTNL lock is used on the update side, and RCU on the read side.
  All bearer instances, including the broadcast bearer, are stored in
  the bearer_list array.
- Node and link level
  All node instances are kept in two lists, tipc_node_list and
  node_htable. Both lists are protected by node_list_lock on the write
  side and guarded by RCU on the read side. All members of the node
  structure, including its link instances, are protected by the node's
  spin lock.
- The relationship between bearer and node
  When a link accesses a bearer, it first looks the bearer up in the
  bearer_list array by its bearer identity. When a bearer accesses a
  node, it walks the node_htable hash list with the node address to
  find the corresponding node.
In the new locking policy, every component has its own private locking
solution, and the relationship between bearer and node is very simple:
they can find each other by node address or bearer identity in the
node_htable hash list or the bearer_list array. A rough sketch of this
read-side pattern is given below.
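The following sketch illustrates that read-side pattern: a bearer lookup
by identity in bearer_list, a node lookup by address in node_htable, and
the per-node spin lock taken before touching the node's members. It is
not the actual TIPC code: only bearer_list, node_htable and the per-node
lock come from the commit text; the struct layouts, hash-table shape and
the helper names (bearer_get(), node_find_and_lock(), example_reader())
are assumptions for illustration, and write-side insertion/removal (RTNL
for bearers, node_list_lock for nodes) is omitted.

/*
 * Illustrative sketch of the read-side pattern only -- not TIPC code.
 */
#include <linux/types.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/hashtable.h>

#define MAX_BEARERS		3	/* illustrative bound */
#define NODE_HTABLE_BITS	10	/* illustrative table size */

struct tipc_bearer;			/* opaque for this sketch */

struct tipc_node {
	u32 addr;
	spinlock_t lock;		/* protects links and other members */
	struct hlist_node hash;
	/* ... link instances etc. ... */
};

static struct tipc_bearer __rcu *bearer_list[MAX_BEARERS + 1];
static DEFINE_HASHTABLE(node_htable, NODE_HTABLE_BITS);

/* Link -> bearer: look the bearer up by identity (bearer_id assumed
 * valid); caller must hold rcu_read_lock(). */
static struct tipc_bearer *bearer_get(u32 bearer_id)
{
	return rcu_dereference(bearer_list[bearer_id]);
}

/* Bearer -> node: look the node up by address, then take its spin lock. */
static struct tipc_node *node_find_and_lock(u32 addr)
{
	struct tipc_node *n;

	hash_for_each_possible_rcu(node_htable, n, hash, addr) {
		if (n->addr == addr) {
			spin_lock_bh(&n->lock);
			return n;
		}
	}
	return NULL;
}

/* A reader brackets both lookups with an RCU read-side section. */
static void example_reader(u32 bearer_id, u32 addr)
{
	struct tipc_bearer *b;
	struct tipc_node *n;

	rcu_read_lock();
	b = bearer_get(bearer_id);
	if (b) {
		/* ... a link would transmit via this bearer here ... */
	}
	n = node_find_and_lock(addr);
	if (n) {
		/* ... access the node's links and other members ... */
		spin_unlock_bh(&n->lock);
	}
	rcu_read_unlock();
}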
Now that all of the above changes are in place, tipc_net_lock can be
removed safely.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Tested-by: Erik Hugne <erik.hugne@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/tipc/name_distr.c')
-rw-r--r--	net/tipc/name_distr.c	2
1 file changed, 0 insertions, 2 deletions
diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
index aff8041dc157..36a72822601c 100644
--- a/net/tipc/name_distr.c
+++ b/net/tipc/name_distr.c
@@ -248,7 +248,6 @@ void tipc_named_node_up(unsigned long nodearg)
 	u32 max_item_buf = 0;
 
 	/* compute maximum amount of publication data to send per message */
-	read_lock_bh(&tipc_net_lock);
 	n_ptr = tipc_node_find(node);
 	if (n_ptr) {
 		tipc_node_lock(n_ptr);
@@ -258,7 +257,6 @@ void tipc_named_node_up(unsigned long nodearg)
 			       ITEM_SIZE) * ITEM_SIZE;
 		tipc_node_unlock(n_ptr);
 	}
-	read_unlock_bh(&tipc_net_lock);
 
 	if (!max_item_buf)
 		return;
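The read_lock_bh(&tipc_net_lock) pair removed above becomes unnecessary
because, per the locking policy described in this commit, the node lookup
itself is RCU-guarded on the read side and the remaining accesses in
tipc_named_node_up() are covered by tipc_node_lock(). Below is a
simplified, hypothetical sketch of such a lookup, reusing the node_htable
and struct tipc_node declarations from the sketch further up; the real
tipc_node_find() differs in details such as the hash function, cluster
checks and node lifetime handling.

/*
 * Simplified, illustrative lookup only -- not the real tipc_node_find().
 * node_htable is walked under rcu_read_lock(), so callers need no outer
 * lock; node lifetime across the return is assumed to be handled by the
 * node layer (e.g. RCU-deferred freeing).
 */
static struct tipc_node *node_find_sketch(u32 addr)
{
	struct tipc_node *n;

	rcu_read_lock();
	hash_for_each_possible_rcu(node_htable, n, hash, addr) {
		if (n->addr == addr) {
			rcu_read_unlock();
			return n;
		}
	}
	rcu_read_unlock();
	return NULL;
}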