author    Santosh Shilimkar <santosh.shilimkar@oracle.com>  2014-02-11 19:34:25 -0800
committer Santosh Shilimkar <ssantosh@kernel.org>           2015-09-30 12:43:25 -0400
commit    9b9acde7e887e057568cd077d9c3377d2cb9aa5b
tree      004e320c1619df5ff09cd3746cca00f12e78585d /net/rds/rds.h
parent    28126959882d3ec4745f2ec800f3a1d74368b2fe
RDS: Use per-bucket rw lock for bind hash-table
One global lock protecting hash tables with 1024 buckets isn't
efficient, and it shows up in massive systems with truckloads
of RDS sockets serving multiple databases. The perf data clearly
highlights the contention on the rwlock in these massive workloads.
When the contention gets worse, the code gets into a state where
it backs off on the lock: with interrupts disabled, it sits and
retries the lock acquisition. This makes the system sluggish and
eventually all sorts of bad things happen.
The simple fix is to move the lock into the hash bucket and
use a per-bucket lock to improve scalability.
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Diffstat (limited to 'net/rds/rds.h')
-rw-r--r--  net/rds/rds.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/net/rds/rds.h b/net/rds/rds.h
index afb4048d0cfd..121fb81aab8b 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -603,6 +603,7 @@ extern wait_queue_head_t rds_poll_waitq;
 int rds_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len);
 void rds_remove_bound(struct rds_sock *rs);
 struct rds_sock *rds_find_bound(__be32 addr, __be16 port);
+void rds_bind_lock_init(void);

 /* cong.c */
 int rds_cong_get_maps(struct rds_connection *conn);