author		Chuck Lever <chuck.lever@oracle.com>	2019-06-19 10:32:48 -0400
committer	Anna Schumaker <Anna.Schumaker@Netapp.com>	2019-07-09 10:30:16 -0400
commit		05eb06d86685e7d9dac60e6bbb46d7f4c30b056e (patch)
tree		8f04bce3b88e16afd6912e44681b80752721c171 /include/trace
parent		1310051c720a83c5717658bcbff710b260f2bff9 (diff)
xprtrdma: Fix occasional transport deadlock
Under high I/O workloads, I've noticed that an RPC/RDMA transport occasionally deadlocks (IOPS goes to zero, and doesn't recover). Diagnosis shows that the sendctx queue is empty, but when sendctxs are returned to the queue, the xprt_write_space wake-up never occurs. The wake-up logic in rpcrdma_sendctx_put_locked is racy.

I noticed that both EMPTY_SCQ and XPRT_WRITE_SPACE are implemented via an atomic bit. Just one of those is sufficient. Removing EMPTY_SCQ in favor of the generic bit mechanism makes the deadlock un-reproducible.

Without EMPTY_SCQ, rpcrdma_buffer::rb_flags is no longer used and is therefore removed.

Unfortunately this patch does not apply cleanly to stable. If needed, someone will have to port it and test it.

Fixes: 2fad659209d5 ("xprtrdma: Wait on empty sendctx queue")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
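To make the lost wake-up concrete, the following is a minimal userspace C sketch of the pattern the message describes; it is not kernel code, and the names (private_empty_flag, queue_is_empty, sleep_until_woken, wake_waiter) are illustrative stand-ins for EMPTY_SCQ, the sendctx queue probe, the write-space wait, and xprt_write_space(). Per the message, the fix drops the transport-private flag in favor of the single generic XPRT_WRITE_SPACE bit; that change lives in the xprtrdma C files, outside the trace-header hunk shown on this page.

/*
 * Minimal userspace sketch of the lost wake-up described above; this is
 * not kernel code.  private_empty_flag stands in for EMPTY_SCQ, and the
 * stubbed helpers stand in for the sendctx queue probe, the write-space
 * wait, and xprt_write_space().
 */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool private_empty_flag;

static bool queue_is_empty(void)    { return true; }	/* stub */
static void sleep_until_woken(void) { }			/* stub */
static void wake_waiter(void)       { }			/* stub */

/* Sender side: finds the queue empty, flags it, then sleeps. */
static void sendctx_get_side(void)
{
	if (queue_is_empty()) {
		/*
		 * If the completion side returns the last sendctx right
		 * here, it sees the flag still clear, skips the wake-up,
		 * and the sleep below never ends: the reported deadlock.
		 */
		atomic_store(&private_empty_flag, true);
		sleep_until_woken();
	}
}

/* Completion side: returns a sendctx, wakes only if the flag was set. */
static void sendctx_put_side(void)
{
	if (atomic_exchange(&private_empty_flag, false))
		wake_waiter();
}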
Diffstat (limited to 'include/trace')
-rw-r--r--	include/trace/events/rpcrdma.h	27
1 file changed, 27 insertions, 0 deletions
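The hunk below adds a new tracepoint, xprtrdma_prepsend_failed. As with any TRACE_EVENT() definition, the build generates a trace_xprtrdma_prepsend_failed() helper that C code can call; the call site sketched here is illustrative only — the enclosing function and the error value are assumptions, not part of this patch.

/* Illustrative call site, not from this patch: the enclosing function
 * and the failure being reported are assumptions. */
static int example_prepare_sendbuf(struct rpc_rqst *rqst)
{
	int ret;

	ret = -EIO;	/* stand-in for a marshaling or DMA-mapping failure */
	if (ret < 0)
		trace_xprtrdma_prepsend_failed(rqst, ret);
	return ret;
}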
diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
index 59492a93fe1d..2fb415136f7b 100644
--- a/include/trace/events/rpcrdma.h
+++ b/include/trace/events/rpcrdma.h
@@ -539,6 +539,33 @@ TRACE_EVENT(xprtrdma_marshal_failed,
)
);
+TRACE_EVENT(xprtrdma_prepsend_failed,
+ TP_PROTO(const struct rpc_rqst *rqst,
+ int ret
+ ),
+
+ TP_ARGS(rqst, ret),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, task_id)
+ __field(unsigned int, client_id)
+ __field(u32, xid)
+ __field(int, ret)
+ ),
+
+ TP_fast_assign(
+ __entry->task_id = rqst->rq_task->tk_pid;
+ __entry->client_id = rqst->rq_task->tk_client->cl_clid;
+ __entry->xid = be32_to_cpu(rqst->rq_xid);
+ __entry->ret = ret;
+ ),
+
+ TP_printk("task:%u@%u xid=0x%08x: ret=%d",
+ __entry->task_id, __entry->client_id, __entry->xid,
+ __entry->ret
+ )
+);
+
TRACE_EVENT(xprtrdma_post_send,
TP_PROTO(
const struct rpcrdma_req *req,