author    Christoph Hellwig <hch@lst.de>       2019-07-25 17:56:47 -0700
committer Jason Gunthorpe <jgg@mellanox.com>   2019-07-26 11:10:53 -0300
commit    d45d464b118f428229d91769c8a3cc1e2e0bb4d5 (patch)
tree      2cc3f6a9435790fb3038c840e77dea66ffb45323 /Documentation
parent    9a4903e49e495bfd2650862dfae4178bebe4db9c (diff)
mm/hmm: merge hmm_range_snapshot into hmm_range_fault
Add a HMM_FAULT_SNAPSHOT flag so that hmm_range_snapshot can be merged into
the almost identical hmm_range_fault function.

Link: https://lore.kernel.org/r/20190726005650.2566-5-rcampbell@nvidia.com
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
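As a rough illustration of what the merge means for callers, here is a minimal
sketch of a driver-side call site before and after the change. Only
hmm_range_fault() and the HMM_FAULT_SNAPSHOT flag come from this patch; the
wrapper function and its surrounding error handling are hypothetical.

    #include <linux/hmm.h>

    /* Hypothetical helper; only the hmm_range_fault() call reflects this patch. */
    static long driver_snapshot_range(struct hmm_range *range)
    {
            /* Previously: ret = hmm_range_snapshot(range); */
            /* The same non-faulting behaviour is now requested via a flag: */
            return hmm_range_fault(range, HMM_FAULT_SNAPSHOT);
    }

Passing 0 for the flags argument keeps the faulting behaviour that
hmm_range_fault() had before this change.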
Diffstat (limited to 'Documentation')
-rw-r--r--   Documentation/vm/hmm.rst   |   17
1 file changed, 8 insertions, 9 deletions
diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 710ce1c701bf..ddcb5ca8b296 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -192,15 +192,14 @@ read only, or fully unmap, etc.). The device must complete the update before
the driver callback returns.
When the device driver wants to populate a range of virtual addresses, it can
-use either::
+use::
- long hmm_range_snapshot(struct hmm_range *range);
- long hmm_range_fault(struct hmm_range *range, bool block);
+ long hmm_range_fault(struct hmm_range *range, unsigned int flags);
-The first one (hmm_range_snapshot()) will only fetch present CPU page table
+With the HMM_FAULT_SNAPSHOT flag, it will only fetch present CPU page table
entries and will not trigger a page fault on missing or non-present entries.
-The second one does trigger a page fault on missing or read-only entries if
-write access is requested (see below). Page faults use the generic mm page
+Without that flag, it does trigger a page fault on missing or read-only entries
+if write access is requested (see below). Page faults use the generic mm page
fault code path just like a CPU page fault.
Both functions copy CPU page table entries into their pfns array argument. Each
@@ -227,20 +226,20 @@ The usage pattern is::
/*
* Just wait for range to be valid, safe to ignore return value as we
- * will use the return value of hmm_range_snapshot() below under the
+ * will use the return value of hmm_range_fault() below under the
* mmap_sem to ascertain the validity of the range.
*/
hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
again:
down_read(&mm->mmap_sem);
- ret = hmm_range_snapshot(&range);
+ ret = hmm_range_fault(&range, HMM_FAULT_SNAPSHOT);
if (ret) {
up_read(&mm->mmap_sem);
if (ret == -EBUSY) {
/*
* No need to check hmm_range_wait_until_valid() return value
- * on retry we will get proper error with hmm_range_snapshot()
+ * on retry we will get proper error with hmm_range_fault()
*/
hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
goto again;
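The hunk above stops inside the retry loop. Purely for orientation, here is a
sketch of how the loop typically continues once hmm_range_fault() succeeds,
following the driver-lock convention of the surrounding usage pattern in
hmm.rst; take_lock()/release_lock() are the same pseudocode helpers that
pattern uses, and the device page table update is a placeholder.

    /* hmm_range_fault() succeeded: re-check validity under the driver lock. */
    take_lock(driver->update);
    if (!hmm_range_valid(&range)) {
        release_lock(driver->update);
        up_read(&mm->mmap_sem);
        goto again;
    }

    /* Use the range.pfns[] snapshot to update the device page table here. */

    hmm_range_unregister(&range);
    release_lock(driver->update);
    up_read(&mm->mmap_sem);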