From af8aab71370a692eaf7e7969ba5b1a455ac20113 Mon Sep 17 00:00:00 2001
From: Sebastian Sanchez
Date: Wed, 2 May 2018 06:43:39 -0700
Subject: IB/hfi1: Optimize kthread pointer locking when queuing CQ entries

All threads queuing CQ entries on different CQs are unnecessarily
synchronized by a spin lock to check that the CQ kthread worker hasn't
been destroyed before queuing a CQ entry.

The lock used in 6efaf10f163d ("IB/rdmavt: Avoid queuing work into a
destroyed cq kthread worker") is a device-global lock and will have
poor performance at scale as completions are entered from a large
number of CPUs.

Convert to use RCU, where the read side of RCU is rvt_cq_enter()
determining that the worker is alive prior to triggering the
completion event. Apply write side RCU semantics in rvt_driver_cq_init()
and rvt_cq_exit().

Fixes: 6efaf10f163d ("IB/rdmavt: Avoid queuing work into a destroyed cq kthread worker")
Cc: <stable@vger.kernel.org> # 4.14.x
Reviewed-by: Mike Marciniszyn
Signed-off-by: Sebastian Sanchez
Signed-off-by: Dennis Dalessandro
Signed-off-by: Doug Ledford
---
 include/rdma/rdma_vt.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/rdma/rdma_vt.h b/include/rdma/rdma_vt.h
index 3f4c187e435d..eec495e68823 100644
--- a/include/rdma/rdma_vt.h
+++ b/include/rdma/rdma_vt.h
@@ -402,7 +402,7 @@ struct rvt_dev_info {
 	spinlock_t pending_lock; /* protect pending mmap list */
 
 	/* CQ */
-	struct kthread_worker *worker; /* per device cq worker */
+	struct kthread_worker __rcu *worker; /* per device cq worker */
 	u32 n_cqs_allocated;    /* number of CQs allocated for device */
 	spinlock_t n_cqs_lock;  /* protect count of in use cqs */
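
Note: the diff above covers only the header annotation (__rcu); the read-
and write-side changes the commit message describes land in the rdmavt .c
code (rvt_cq_enter(), rvt_driver_cq_init(), rvt_cq_exit()). Below is a
minimal sketch of that RCU pattern, not the verbatim .c hunks: the helper
names (rvt_cq_kick_worker, rvt_cq_publish_worker, rvt_cq_teardown_worker)
are hypothetical, and the rvt_cq fields used (notify, triggered, comptask)
are assumed from the rdmavt code of this era.

#include <linux/kthread.h>
#include <linux/rcupdate.h>
#include <rdma/rdma_vt.h>
#include <rdma/rdmavt_cq.h>

/*
 * Sketch only: helper names are hypothetical; see the lead-in above.
 * Read side (as in rvt_cq_enter()): queue work only if the worker is
 * still published. rcu_read_lock() pins the worker for the duration
 * of the critical section, replacing the device-global spin lock.
 */
static void rvt_cq_kick_worker(struct rvt_dev_info *rdi, struct rvt_cq *cq)
{
	struct kthread_worker *worker;

	rcu_read_lock();
	worker = rcu_dereference(rdi->worker);
	if (likely(worker)) {
		cq->notify = RVT_CQ_NONE;
		cq->triggered++;
		kthread_queue_work(worker, &cq->comptask);
	}
	rcu_read_unlock();
}

/* Write side (as in rvt_driver_cq_init()): publish the new worker. */
static void rvt_cq_publish_worker(struct rvt_dev_info *rdi,
				  struct kthread_worker *worker)
{
	rcu_assign_pointer(rdi->worker, worker);
}

/*
 * Write side (as in rvt_cq_exit()): unpublish the worker, wait for all
 * readers to drain, then destroy it.
 */
static void rvt_cq_teardown_worker(struct rvt_dev_info *rdi)
{
	struct kthread_worker *worker;

	worker = rcu_dereference_protected(rdi->worker, 1);
	if (!worker)
		return;
	RCU_INIT_POINTER(rdi->worker, NULL);
	synchronize_rcu(); /* no rcu_dereference() reader can still see it */
	kthread_destroy_worker(worker);
}

The synchronize_rcu() in teardown is what makes the lock-free read side
safe: once it returns, every reader that could have observed the old
pointer has exited its critical section, so kthread_destroy_worker()
cannot race with kthread_queue_work().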