author     Jason Gunthorpe <jgg@nvidia.com>  2020-08-18 15:05:18 +0300
committer  Jason Gunthorpe <jgg@nvidia.com>  2020-08-27 08:38:15 -0300
commit     d114c6feedfe0600c19b9f9479a4026354d1f7fd (patch)
tree       14e0ad035061b8d22bf950bb0de6a7977fee2fd0 /drivers/infiniband/core/cma.c
parent     95fe51096b7adf1d1e7315c49c75e2f75f162584 (diff)
RDMA/cma: Add missing locking to rdma_accept()
In almost all cases rdma_accept() is called under the handler_mutex by
ULPs from their handler callbacks. The one exception was ucma, which did
not take the handler_mutex.

To improve the understandability of the locking scheme, obtain the mutex
for ucma as well.

This improves how ucma works by allowing it to directly use the
handler_mutex for some of its internal locking against the handler
callbacks instead of the global file->mut lock.

There does not seem to be a serious bug here, other than that a
DISCONNECT event can be delivered concurrently with accept succeeding.
Link: https://lore.kernel.org/r/20200818120526.702120-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
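
For illustration, a minimal sketch of the common calling pattern the message
describes: the RDMA/CM core invokes the event handler with
id_priv->handler_mutex already held, so the ULP may call rdma_accept()
directly from the callback. The handler below and its connection-parameter
setup are hypothetical; only the rdma_cm types and rdma_accept() itself come
from the kernel API.

    #include <rdma/rdma_cm.h>

    /*
     * Hypothetical ULP event handler. The cm core holds handler_mutex
     * while this runs, which is what the lockdep_assert_held() added by
     * this patch checks in __rdma_accept().
     */
    static int example_cm_handler(struct rdma_cm_id *id,
                                  struct rdma_cm_event *event)
    {
            struct rdma_conn_param param = {};  /* real ULPs fill this in */

            if (event->event == RDMA_CM_EVENT_CONNECT_REQUEST)
                    return rdma_accept(id, &param); /* handler_mutex held */
            return 0;
    }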
Diffstat (limited to 'drivers/infiniband/core/cma.c')
-rw-r--r--  drivers/infiniband/core/cma.c  25
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 26de0dab60bb..78641858abe2 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -4154,14 +4154,15 @@ static int cma_send_sidr_rep(struct rdma_id_private *id_priv,
 int __rdma_accept(struct rdma_cm_id *id, struct rdma_conn_param *conn_param,
                   const char *caller)
 {
-        struct rdma_id_private *id_priv;
+        struct rdma_id_private *id_priv =
+                container_of(id, struct rdma_id_private, id);
         int ret;
 
-        id_priv = container_of(id, struct rdma_id_private, id);
+        lockdep_assert_held(&id_priv->handler_mutex);
 
         rdma_restrack_set_task(&id_priv->res, caller);
 
-        if (!cma_comp(id_priv, RDMA_CM_CONNECT))
+        if (READ_ONCE(id_priv->state) != RDMA_CM_CONNECT)
                 return -EINVAL;
 
         if (!id->qp && conn_param) {
@@ -4214,6 +4215,24 @@ int __rdma_accept_ece(struct rdma_cm_id *id, struct rdma_conn_param *conn_param,
 }
 EXPORT_SYMBOL(__rdma_accept_ece);
 
+void rdma_lock_handler(struct rdma_cm_id *id)
+{
+        struct rdma_id_private *id_priv =
+                container_of(id, struct rdma_id_private, id);
+
+        mutex_lock(&id_priv->handler_mutex);
+}
+EXPORT_SYMBOL(rdma_lock_handler);
+
+void rdma_unlock_handler(struct rdma_cm_id *id)
+{
+        struct rdma_id_private *id_priv =
+                container_of(id, struct rdma_id_private, id);
+
+        mutex_unlock(&id_priv->handler_mutex);
+}
+EXPORT_SYMBOL(rdma_unlock_handler);
+
 int rdma_notify(struct rdma_cm_id *id, enum ib_event_type event)
 {
         struct rdma_id_private *id_priv;
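
As a usage note, the new helpers exist for callers like ucma that accept
outside of a handler callback. A sketch, assuming a plain process-context
caller (the function below is hypothetical; rdma_lock_handler(),
rdma_unlock_handler(), and rdma_accept() are the kernel API):

    /*
     * Taking handler_mutex via the new helpers satisfies the
     * lockdep_assert_held() in __rdma_accept() and excludes concurrent
     * handler callbacks (e.g. a DISCONNECT event) while the accept runs.
     */
    static int example_accept(struct rdma_cm_id *id,
                              struct rdma_conn_param *conn_param)
    {
            int ret;

            rdma_lock_handler(id);
            ret = rdma_accept(id, conn_param);
            rdma_unlock_handler(id);
            return ret;
    }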