author     Vlastimil Babka <vbabka@suse.cz>                2013-09-11 14:22:35 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2013-09-11 15:58:01 -0700
commit     7a8010cd36273ff5f6fea5201ef9232f30cebbd9 (patch)
tree       3805f3d9a8a1f1c1c555ef31bc1bdb51fb51e33e /include
parent     5b40998ae35cf64561868370e6c9f3d3e94b6bf7 (diff)
download   linux-stable-7a8010cd36273ff5f6fea5201ef9232f30cebbd9.tar.gz
           linux-stable-7a8010cd36273ff5f6fea5201ef9232f30cebbd9.tar.bz2
           linux-stable-7a8010cd36273ff5f6fea5201ef9232f30cebbd9.zip
mm: munlock: manual pte walk in fast path instead of follow_page_mask()
Currently munlock_vma_pages_range() calls follow_page_mask() to obtain each
individual struct page. This entails repeated full page table translations and
a page table lock taken for each page separately.

This patch avoids the costly follow_page_mask() where possible by iterating
over the ptes within a single pmd under a single page table lock. The first
pte is obtained by get_locked_pte() for the non-THP page acquired by the
initial follow_page_mask(). The rest of the on-stack pagevec for munlock is
filled up by the pte walk for as long as pte_present() and vm_normal_page()
are sufficient to obtain the struct page.

After this patch, a 14% speedup was measured for munlocking a 56GB memory
area with THP disabled.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Jörn Engel <joern@logfs.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michel Lespinasse <walken@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
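To make the fast path concrete, below is a condensed sketch of the fill loop
the message describes, modeled on the __munlock_pagevec_fill() helper this
patch adds to mm/mlock.c. The function name, signature, and trimmed details
here are illustrative rather than the verbatim patch body; get_locked_pte(),
pmd_addr_end(), pte_present(), vm_normal_page(), pagevec_add() and
pte_unmap_unlock() are the kernel APIs the real helper uses. Note the
page_zone_id() comparison: munlock batches pages per zone, and this new
caller is why the page_zone_id() comment is reworded in the mm.h hunk below.

/*
 * Sketch of the pte-walk fast path: under one page table lock, walk the
 * ptes of a single pmd, resolve each present pte to a struct page via
 * vm_normal_page(), and stuff the pages into the caller's on-stack
 * pagevec. Returns the address at which the caller should continue,
 * falling back to follow_page_mask() if the walk stopped early.
 */
static unsigned long munlock_pte_walk_sketch(struct pagevec *pvec,
		struct vm_area_struct *vma, unsigned long start,
		unsigned long end, int zoneid)
{
	spinlock_t *ptl;
	/* Start from a pte known to exist: the initially pinned page. */
	pte_t *pte = get_locked_pte(vma->vm_mm, start, &ptl);

	/* The lock covers only this pmd, so never cross its boundary. */
	end = min(end, pmd_addr_end(start, end));
	start += PAGE_SIZE;	/* the pinned page was already handled */

	while (start < end) {
		struct page *page = NULL;

		pte++;
		if (pte_present(*pte))
			page = vm_normal_page(vma, start, *pte);
		/* Stop if there is no cheap struct page or we left the zone. */
		if (!page || page_zone_id(page) != zoneid)
			break;

		get_page(page);
		start += PAGE_SIZE;
		if (pagevec_add(pvec, page) == 0)
			break;	/* pagevec full; caller will drain it */
	}
	pte_unmap_unlock(pte, ptl);
	return start;
}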
Diffstat (limited to 'include')
-rw-r--r--  include/linux/mm.h  12
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index dce24569f8fc..03f84b8d7359 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -643,12 +643,12 @@ static inline enum zone_type page_zonenum(const struct page *page)
#endif
/*
- * The identification function is only used by the buddy allocator for
- * determining if two pages could be buddies. We are not really
- * identifying a zone since we could be using a the section number
- * id if we have not node id available in page flags.
- * We guarantee only that it will return the same value for two
- * combinable pages in a zone.
+ * The identification function is mainly used by the buddy allocator for
+ * determining if two pages could be buddies. We are not really identifying
+ * the zone since we could be using the section number id if we do not have
+ * node id available in page flags.
+ * We only guarantee that it will return the same value for two combinable
+ * pages in a zone.
*/
static inline int page_zone_id(struct page *page)
{
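The guarantee spelled out in the rewritten comment is all the buddy allocator
relies on: it never interprets the id, it only compares the values for two
merge candidates. A hedged sketch of that use, loosely modeled on
page_is_buddy() in mm/page_alloc.c; the function name and trimmed checks are
illustrative, and page_order() is assumed available as the free-page order
helper from mm/page_alloc.c:

/*
 * Illustrative consumer of page_zone_id(): two pages may only merge
 * into a higher-order buddy block when their ids compare equal. Which
 * bits the id is built from (zone, node, or section) is irrelevant.
 */
static inline int pages_could_be_buddies(struct page *page,
					 struct page *buddy,
					 unsigned int order)
{
	if (page_zone_id(page) != page_zone_id(buddy))
		return 0;	/* never combinable across zones */

	/* the candidate must itself be a free block of the same order */
	return PageBuddy(buddy) && page_order(buddy) == order;
}

Comparing opaque ids rather than zone pointers is what lets the same check
work whether or not a node id is encoded in page flags, which is exactly the
caveat the updated comment preserves.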