| author | Yishai Hadas <yishaih@nvidia.com> | 2020-09-30 19:38:25 +0300 |
|---|---|---|
| committer | Jason Gunthorpe <jgg@nvidia.com> | 2020-10-01 16:39:54 -0300 |
| commit | 36f30e486dce22345c2dd3a3ba439c12cd67f6ba (patch) | |
| tree | ced2407f4fa17dbd933eb1f1b5c128531a09cda2 /include/rdma | |
| parent | 2ee9bf346fbfd1dad0933b9eb3a4c2c0979b633e (diff) | |
IB/core: Improve ODP to use hmm_range_fault()
Move to use hmm_range_fault() instead of get_user_pages_remote() to
improve performance in a few aspects:
- There is no need to allocate and free memory to hold its output.
- There is no need to use put_page() to unpin the pages.
- The logic to detect contiguous pages is based on the returned order;
  there is no need to iterate per page and evaluate it.
In addition, moving to hmm_range_fault() makes it possible to reduce page
faults in the system with its snapshot mode; this will be introduced in
subsequent patches of this series.
As part of this, clean up some flows and use the required data structures
to work with hmm_range_fault().
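For context, the following is a minimal sketch of the hmm_range_fault()
calling convention this series moves to; it is not the code from this
patch, and the helper name odp_fault_range() and the fault_writable
parameter are hypothetical:

```c
#include <linux/hmm.h>
#include <linux/mmu_notifier.h>
#include <linux/sched/mm.h>
#include <rdma/ib_umem_odp.h>

/*
 * Hypothetical helper sketching the hmm_range_fault() flow. pfn_list
 * is both input and output: hmm_range_fault() writes one HMM pfn per
 * page directly into it, so no temporary output buffer is allocated
 * and no put_page() is needed afterwards.
 */
static int odp_fault_range(struct ib_umem_odp *umem_odp, u64 start,
			   u64 end, bool fault_writable)
{
	struct mm_struct *owning_mm = umem_odp->umem.owning_mm;
	struct hmm_range range = {
		.notifier = &umem_odp->notifier,
		.start = start,
		.end = end,
		.hmm_pfns = umem_odp->pfn_list,
		.default_flags = HMM_PFN_REQ_FAULT,
	};
	int ret;

	if (fault_writable)
		range.default_flags |= HMM_PFN_REQ_WRITE;

	if (!mmget_not_zero(owning_mm))
		return -EFAULT;

	range.notifier_seq = mmu_interval_read_begin(range.notifier);
	mmap_read_lock(owning_mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(owning_mm);
	mmput(owning_mm);

	/* -EBUSY: the notifier sequence changed under us; caller retries. */
	return ret;
}
```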
Link: https://lore.kernel.org/r/20200930163828.1336747-2-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Diffstat (limited to 'include/rdma')
-rw-r--r-- | include/rdma/ib_umem_odp.h | 21 |
1 file changed, 8 insertions, 13 deletions
```diff
diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h
index d16d2c17e733..a53b62ac8a9d 100644
--- a/include/rdma/ib_umem_odp.h
+++ b/include/rdma/ib_umem_odp.h
@@ -14,17 +14,13 @@ struct ib_umem_odp {
 	struct mmu_interval_notifier notifier;
 	struct pid *tgid;
 
+	/* An array of the pfns included in the on-demand paging umem. */
+	unsigned long *pfn_list;
+
 	/*
-	 * An array of the pages included in the on-demand paging umem.
-	 * Indices of pages that are currently not mapped into the device will
-	 * contain NULL.
-	 */
-	struct page **page_list;
-	/*
-	 * An array of the same size as page_list, with DMA addresses mapped
-	 * for pages the pages in page_list. The lower two bits designate
-	 * access permissions. See ODP_READ_ALLOWED_BIT and
-	 * ODP_WRITE_ALLOWED_BIT.
+	 * An array with DMA addresses mapped for pfns in pfn_list.
+	 * The lower two bits designate access permissions.
+	 * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT.
 	 */
 	dma_addr_t *dma_list;
 	/*
@@ -97,9 +93,8 @@ ib_umem_odp_alloc_child(struct ib_umem_odp *root_umem, unsigned long addr,
 			const struct mmu_interval_notifier_ops *ops);
 void ib_umem_odp_release(struct ib_umem_odp *umem_odp);
 
-int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offset,
-			      u64 bcnt, u64 access_mask,
-			      unsigned long current_seq);
+int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 start_offset,
+				 u64 bcnt, u64 access_mask);
 void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offset,
 				 u64 bound);
 
```
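To illustrate the third point of the changelog (contiguity from the
returned order rather than a per-page scan), here is a hedged sketch
of consuming the pfn_list; the helper name walk_pfn_blocks() is
hypothetical and the block stepping assumes the range start is
aligned to the reported mapping order:

```c
#include <linux/hmm.h>
#include <linux/printk.h>

/*
 * Hypothetical walk over a faulted pfn_list, showing how the mapping
 * order reported by hmm_range_fault() replaces the old per-page
 * contiguity scan. Assumes the range start is aligned to the mapping
 * order, so stepping by (1 << order) stays on block boundaries.
 */
static void walk_pfn_blocks(const unsigned long *pfn_list,
			    unsigned long npages)
{
	unsigned long i = 0;

	while (i < npages) {
		unsigned long hmm_pfn = pfn_list[i];
		unsigned int order;

		if (!(hmm_pfn & HMM_PFN_VALID)) {
			i++;
			continue;
		}
		order = hmm_pfn_to_map_order(hmm_pfn);
		/* One DMA mapping could cover this whole block. */
		pr_debug("pfn entry %lu: page %p, %lu contiguous pages\n",
			 i, hmm_pfn_to_page(hmm_pfn), 1UL << order);
		i += 1UL << order;
	}
}
```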