path: root/mm/memcontrol.c
author     James Morse <james.morse@arm.com>                  2016-10-07 17:00:12 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>     2016-10-07 18:46:28 -0700
commit     0247f3f4d78a475cd3181dc9fc162fdef773aaaa (patch)
tree       2e7c179bbed1aa445b8305071dcbf44ce7d299a3 /mm/memcontrol.c
parent     6fcb52a56ff60d240f06296b12827e7f20d45f63 (diff)
mm/memcontrol.c: make the walk_page_range() limit obvious
mem_cgroup_count_precharge() and mem_cgroup_move_charge() both call
walk_page_range() on the range 0 to ~0UL, and neither provides a pte_hole
callback, which causes the current implementation to skip non-vma regions.
This is all fine, but follow-up changes would like to make walk_page_range()
more generic, so it is better to be explicit about which range to traverse.
Let's use highest_vm_end to explicitly traverse only user-mmapped memory.

[mhocko@kernel.org: rewrote changelog]
Link: http://lkml.kernel.org/r/1472655897-22532-1-git-send-email-james.morse@arm.com
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
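For illustration, here is a minimal sketch of the resulting pattern, written
against the 4.8-era walk_page_range() API; the helper names count_pte and
count_user_ptes are hypothetical and not part of this patch:

#include <linux/mm.h>

/* Hypothetical callback: invoked for each pte the walker visits. */
static int count_pte(pte_t *pte, unsigned long addr,
		     unsigned long next, struct mm_walk *walk)
{
	unsigned long *count = walk->private;

	(*count)++;	/* just count entries; return 0 to keep walking */
	return 0;
}

/* Walk only the address range actually covered by user vmas. */
static unsigned long count_user_ptes(struct mm_struct *mm)
{
	unsigned long count = 0;
	struct mm_walk walk = {
		.pte_entry	= count_pte,
		.mm		= mm,
		.private	= &count,
	};

	down_read(&mm->mmap_sem);
	/*
	 * mm->highest_vm_end is the end of the highest-mapped vma, so
	 * [0, mm->highest_vm_end) already covers every user mapping.
	 * With no .pte_hole callback, gaps between vmas are skipped
	 * either way, which is why 0..~0UL behaved identically.
	 */
	walk_page_range(0, mm->highest_vm_end, &walk);
	up_read(&mm->mmap_sem);

	return count;
}

Bounding the walk at mm->highest_vm_end makes the intent visible at the call
site instead of relying on the walker's hole-skipping behaviour.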
Diffstat (limited to 'mm/memcontrol.c')
-rw-r--r--  mm/memcontrol.c  6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5579e762b1ce..0739d4129a93 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4681,7 +4681,8 @@ static unsigned long mem_cgroup_count_precharge(struct mm_struct *mm)
 		.mm = mm,
 	};
 	down_read(&mm->mmap_sem);
-	walk_page_range(0, ~0UL, &mem_cgroup_count_precharge_walk);
+	walk_page_range(0, mm->highest_vm_end,
+			&mem_cgroup_count_precharge_walk);
 	up_read(&mm->mmap_sem);
 
 	precharge = mc.precharge;
@@ -4969,7 +4970,8 @@ retry:
 	 * When we have consumed all precharges and failed in doing
 	 * additional charge, the page walk just aborts.
 	 */
-	walk_page_range(0, ~0UL, &mem_cgroup_move_charge_walk);
+	walk_page_range(0, mc.mm->highest_vm_end, &mem_cgroup_move_charge_walk);
+
 	up_read(&mc.mm->mmap_sem);
 	atomic_dec(&mc.from->moving_account);
 }
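For context on the pte_hole remark in the changelog, a hedged sketch of what
such a callback looks like under the same 4.8-era API (note_hole is a made-up
name); the two walks patched above leave .pte_hole NULL, so unmapped ranges
are never reported to them:

#include <linux/mm.h>

/*
 * Hypothetical pte_hole callback: called for address ranges the walker
 * finds unmapped (gaps between vmas, or empty page-table levels within
 * one), if installed in struct mm_walk.
 */
static int note_hole(unsigned long addr, unsigned long next,
		     struct mm_walk *walk)
{
	pr_debug("unmapped hole: %#lx-%#lx\n", addr, next);
	return 0;	/* a non-zero return would abort the walk */
}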