author | Kirill A. Shutemov <kirill.shutemov@linux.intel.com> | 2015-04-14 15:44:39 -0700
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2015-04-14 16:49:00 -0700
commit | fc05f566210fa57f8e68ead8762b8dbb3f1c61e3 (patch)
tree | 429463d285b012a54d213e7d7eb8810e55e16b36 /mm/mlock.c
parent | 84d33df279e0380995b0e03fb8aad04cef2bc29f (diff)
mm: rename __mlock_vma_pages_range() to populate_vma_page_range()
__mlock_vma_pages_range() doesn't necessarily mlock pages: whether it does depends on the vma flags, and the same code path is also used for MAP_POPULATE.
Let's rename __mlock_vma_pages_range() to populate_vma_page_range().
This patch also drops the mlock_vma_pages_range() references from the
documentation; that function was removed in cea10a19b797 ("mm: directly use
__mlock_vma_pages_range() in find_extend_vma()").
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/mlock.c')
-rw-r--r-- | mm/mlock.c | 12
1 file changed, 6 insertions, 6 deletions
diff --git a/mm/mlock.c b/mm/mlock.c
index f756e28b33fc..9d0f3cd716c5 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -206,13 +206,13 @@ out:
 }
 
 /**
- * __mlock_vma_pages_range() - mlock a range of pages in the vma.
+ * populate_vma_page_range() - populate a range of pages in the vma.
  * @vma:   target vma
  * @start: start address
  * @end:   end address
  * @nonblocking:
  *
- * This takes care of making the pages present too.
+ * This takes care of mlocking the pages too if VM_LOCKED is set.
  *
  * return 0 on success, negative error code on error.
  *
@@ -224,7 +224,7 @@ out:
  * If @nonblocking is non-NULL, it must held for read only and may be
  * released.  If it's released, *@nonblocking will be set to 0.
  */
-long __mlock_vma_pages_range(struct vm_area_struct *vma,
+long populate_vma_page_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end, int *nonblocking)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -596,7 +596,7 @@ success:
 	/*
 	 * vm_flags is protected by the mmap_sem held in write mode.
 	 * It's okay if try_to_unmap_one unmaps a page just after we
-	 * set VM_LOCKED, __mlock_vma_pages_range will bring it back.
+	 * set VM_LOCKED, populate_vma_page_range will bring it back.
 	 */
 
 	if (lock)
@@ -702,11 +702,11 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 		if (nstart < vma->vm_start)
 			nstart = vma->vm_start;
 		/*
-		 * Now fault in a range of pages. __mlock_vma_pages_range()
+		 * Now fault in a range of pages. populate_vma_page_range()
 		 * double checks the vma flags, so that it won't mlock pages
 		 * if the vma was already munlocked.
 		 */
-		ret = __mlock_vma_pages_range(vma, nstart, nend, &locked);
+		ret = populate_vma_page_range(vma, nstart, nend, &locked);
 		if (ret < 0) {
 			if (ignore_errors) {
 				ret = 0;
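The distinction the rename captures can be seen from userspace. The following is a minimal sketch, not part of the patch, assuming only the standard mmap(2)/mlock(2) interfaces and a mounted /proc: both mmap() with MAP_POPULATE and mlock() pre-fault a range through this code path, but only mlock() sets VM_LOCKED, so only the mlocked region is accounted in VmLck.

```c
/*
 * Illustrative userspace sketch (not part of this patch): MAP_POPULATE and
 * mlock() both prefault pages, but only mlock() sets VM_LOCKED, so only the
 * mlocked region appears in VmLck. Assumes a Linux system with /proc mounted.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

static void print_vmlck(const char *when)
{
	char line[256];
	FILE *f = fopen("/proc/self/status", "r");

	if (!f)
		return;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "VmLck:", 6))
			printf("%-22s %s", when, line);
	}
	fclose(f);
}

int main(void)
{
	size_t len = 16 * 1024 * 1024;

	print_vmlck("at start:");

	/* Prefaulted but not locked: pages are populated, VmLck stays 0. */
	void *populated = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	if (populated == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}
	print_vmlck("after MAP_POPULATE:");

	/* Locked: VM_LOCKED is set and the pages are faulted in and kept resident. */
	void *locked = mmap(NULL, len, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (locked == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}
	if (mlock(locked, len) != 0)
		perror("mlock (RLIMIT_MEMLOCK may be too low)");
	print_vmlck("after mlock:");

	munlock(locked, len);
	munmap(locked, len);
	munmap(populated, len);
	return EXIT_SUCCESS;
}
```

On a typical system this should report VmLck: 0 kB after the MAP_POPULATE mapping and roughly 16384 kB after mlock(); if mlock() fails, raise the locked-memory limit (e.g. ulimit -l) and rerun.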