author	Haggai Eran <haggaie@mellanox.com>	2015-01-06 13:56:02 +0200
committer	Roland Dreier <roland@purestorage.com>	2015-02-17 22:14:56 -0800
commit	4fc701ead77ede96df3e8b3de13fdf2b1326ee5b (patch)
tree	d2a4ba21782bad0263a96ef738a1187319c8a36c /drivers/infiniband
parent	7eae20db6adf721d0c14ad2a37208278ce4f11dc (diff)
IB/core: Properly handle registration of on-demand paging MRs after dereg
When the last on-demand paging MR is released, the notifier count is left non-zero so that concurrent page faults will have to abort. If a new MR is then registered, the counter is reset. However, the decision to put the new MR on the list waiting for the notifier count to reach zero is made before the counter is reset. An invalidation or another MR registration can release such an MR to handle page faults, but without such an event the MR can wait forever.

Fix this by also checking whether the MR is the first on-demand paging MR when deciding whether it is ready to handle page faults. If it is the first MR, we know that there are no mmu notifiers running in parallel to the registration.

Fixes: 882214e2b128 ("IB/core: Implement support for MMU notifiers regarding on demand paging regions")
Signed-off-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Shachar Raindel <raindel@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
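The userspace sketch below (ctx_t and mr_counters_active are illustrative stand-ins, not the kernel's structures or functions) models the decision the patch changes: even when a stale notifier count is left over from the previous generation of MRs, the very first ODP MR registered on a context cannot overlap a running notifier, so its counters can be marked active immediately instead of waiting on the list.

/* Simplified model of the fixed check; all names here are hypothetical. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
	atomic_int notifier_count; /* may be stale (> 0) after the last MR was released */
	int odp_mrs_count;         /* registered ODP MRs, including the one being added */
} ctx_t;

/* Returns true if a newly registered MR may handle page faults right away. */
static bool mr_counters_active(ctx_t *ctx)
{
	/* Old logic checked only the notifier count, so a stale count left
	 * the first new MR waiting forever.  New logic also activates the
	 * first ODP MR, which cannot race with an in-flight notifier. */
	return atomic_load(&ctx->notifier_count) == 0 ||
	       ctx->odp_mrs_count == 1;
}

int main(void)
{
	ctx_t ctx = { .odp_mrs_count = 0 };

	/* Stale count left behind when the last MR was deregistered. */
	atomic_store(&ctx.notifier_count, 1);

	ctx.odp_mrs_count++;	/* register the first new MR */
	printf("first MR active: %s\n",
	       mr_counters_active(&ctx) ? "yes" : "no"); /* "yes" with the fix */
	return 0;
}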
Diffstat (limited to 'drivers/infiniband')
-rw-r--r--	drivers/infiniband/core/umem_odp.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 6095872549e7..8b8cc6fa0ab0 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -294,7 +294,8 @@ int ib_umem_odp_get(struct ib_ucontext *context, struct ib_umem *umem)
 	if (likely(ib_umem_start(umem) != ib_umem_end(umem)))
 		rbt_ib_umem_insert(&umem->odp_data->interval_tree,
 				   &context->umem_tree);
-	if (likely(!atomic_read(&context->notifier_count)))
+	if (likely(!atomic_read(&context->notifier_count)) ||
+	    context->odp_mrs_count == 1)
 		umem->odp_data->mn_counters_active = true;
 	else
 		list_add(&umem->odp_data->no_private_counters,