author	Jeff Layton <jlayton@primarydata.com>	2014-11-21 14:19:30 -0500
committer	J. Bruce Fields <bfields@redhat.com>	2014-12-09 11:22:22 -0500
commit	b1691bc03d4eddb959234409167bef9be9e62d74 (patch)
tree	2ccfca41ffcffc69a3bacc0bb1e9e5754595a0e6 /include/trace/events
parent	403c7b44441d60aba7f8a134c31279ffa60ea769 (diff)
download	linux-b1691bc03d4eddb959234409167bef9be9e62d74.tar.gz
	linux-b1691bc03d4eddb959234409167bef9be9e62d74.tar.bz2
	linux-b1691bc03d4eddb959234409167bef9be9e62d74.zip
sunrpc: convert to lockless lookup of queued server threads
Testing has shown that the pool->sp_lock can be a bottleneck on a busy server. Every time data is received on a socket, the server must take that lock in order to dequeue a thread from the sp_threads list.

Address this problem by eliminating the sp_threads list (which contains threads that are currently idle) and replacing it with a RQ_BUSY flag in svc_rqst. This allows us to walk the sp_all_threads list under the rcu_read_lock and find a suitable thread for the xprt by doing a test_and_set_bit.

Note, however, that this approach still has a potential atomicity problem. We don't want svc_xprt_do_enqueue to set the rqst->rq_xprt pointer unless a test_and_set_bit of RQ_BUSY returned zero (which indicates that the thread was idle). But, by the time we check that, the bit could be flipped by a waking thread.

To address this, we acquire a new per-rqst spinlock (rq_lock) and take that before doing the test_and_set_bit. If that returns false, then we can set rq_xprt and drop the spinlock. Then, when the thread wakes up, it must set the bit under the same spinlock and can trust that if it was already set then the rq_xprt is also properly set.

With this scheme, the case where we have an idle thread no longer needs to take the highly contended pool->sp_lock at all, and that removes the bottleneck.

That still leaves one issue: what of the case where we walk the whole sp_all_threads list and don't find an idle thread? Because the search is lockless, it's possible for the queueing to race with a thread that is about to go to sleep. To address that, we queue the xprt and then search again. If we find an idle thread at that point, we can't attach the xprt to it directly, since that might race with a different thread waking up and finding it. All we can do is wake the idle thread back up and let it attempt to find the now-queued xprt.

Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Tested-by: Chris Worley <chris.worley@primarydata.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
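The enqueue-side scheme described above can be sketched roughly as follows. This is an illustrative simplification, not the literal patch: the field names (sp_all_threads, rq_all, rq_flags, rq_lock, rq_xprt, rq_task) follow the commit message and struct svc_rqst, but refcounting, tracepoints, the exact locking context and the requeue/retry path are omitted, and enqueue_idle_thread_sketch is a hypothetical helper name.

static void enqueue_idle_thread_sketch(struct svc_pool *pool, struct svc_xprt *xprt)
{
	struct svc_rqst *rqstp;

	rcu_read_lock();
	list_for_each_entry_rcu(rqstp, &pool->sp_all_threads, rq_all) {
		/* Take the per-rqst lock before test_and_set_bit so that
		 * setting rq_xprt cannot race with the thread waking up. */
		spin_lock_bh(&rqstp->rq_lock);
		if (test_and_set_bit(RQ_BUSY, &rqstp->rq_flags)) {
			/* Bit was already set: thread is busy, try the next one. */
			spin_unlock_bh(&rqstp->rq_lock);
			continue;
		}
		/* Bit was clear, so the thread was idle: hand it the transport
		 * and wake it, without ever touching pool->sp_lock. */
		rqstp->rq_xprt = xprt;
		spin_unlock_bh(&rqstp->rq_lock);
		wake_up_process(rqstp->rq_task);
		goto out;
	}
	/* No idle thread was found: queue the xprt on the pool, search again,
	 * and if an idle thread turns up now, only wake it so that it finds
	 * the queued xprt itself (attaching it directly here could race with
	 * a different waker). */
out:
	rcu_read_unlock();
}

On the wakeup side, per the commit message, the thread sets RQ_BUSY under the same rq_lock before it looks at rq_xprt, which is what makes the hand-off above safe.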
Diffstat (limited to 'include/trace/events')
-rw-r--r--	include/trace/events/sunrpc.h	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
index 08a5fed50f34..ee4438a63a48 100644
--- a/include/trace/events/sunrpc.h
+++ b/include/trace/events/sunrpc.h
@@ -419,7 +419,8 @@ TRACE_EVENT(xs_tcp_data_recv,
{ (1UL << RQ_USEDEFERRAL), "RQ_USEDEFERRAL"}, \
{ (1UL << RQ_DROPME), "RQ_DROPME"}, \
{ (1UL << RQ_SPLICE_OK), "RQ_SPLICE_OK"}, \
- { (1UL << RQ_VICTIM), "RQ_VICTIM"})
+ { (1UL << RQ_VICTIM), "RQ_VICTIM"}, \
+ { (1UL << RQ_BUSY), "RQ_BUSY"})
TRACE_EVENT(svc_recv,
TP_PROTO(struct svc_rqst *rqst, int status),
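For context on how the hunk above takes effect, here is a hedged illustration rather than a quote of the file: the { bit, name } pairs belong to the show_rqstp_flags() helper, which hands them to __print_flags() so that tracepoints such as svc_recv can print rq_flags symbolically; with this patch, RQ_BUSY shows up by name in that output. The RQ_SECURE entry and the TP_printk format string below are abbreviated examples, not the exact file contents.

#define show_rqstp_flags(flags)						\
	__print_flags(flags, "|",					\
		{ (1UL << RQ_SECURE),		"RQ_SECURE"},		\
		/* ... other RQ_* flags, including the new entry ... */\
		{ (1UL << RQ_BUSY),		"RQ_BUSY"})

/* Used later in the trace event's format string, e.g.: */
	TP_printk("... flags=%s", show_rqstp_flags(__entry->flags))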