commit     2761fff65fbf59cd77346fc63b76f651cf0f97d2
tree       6cd060728435009ef9e37c0c8869b3b62e0f5647 /mm
parent     d74d61d90babd63b2fc953fbf0ce5f4fe2d47cf0
author     Ralph Campbell <rcampbell@nvidia.com>            2020-10-13 16:53:13 -0700
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2020-10-29 09:55:15 +0100
mm/memcg: fix device private memcg accounting
[ Upstream commit 9a137153fc8798a89d8fce895cd0a06ea5b8e37c ]
The code in mc_handle_swap_pte() checks for non_swap_entry() and returns
NULL before checking is_device_private_entry(), so device private pages are
never handled. Fix this by checking for non_swap_entry() after handling
device private swap PTEs.
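
The ordering matters because a device private swap entry is itself a non-swap
entry, so the early non_swap_entry() test discarded it before the
is_device_private_entry() branch could run. As a standalone illustration of
that pattern (not kernel code; the enum and helper predicates below are
simplified stand-ins for the kernel's swap-entry helpers):

  #include <stdbool.h>
  #include <stdio.h>

  /*
   * Toy model: "device private" is a special case of "non-swap", so a broad
   * non-swap reject placed first swallows the special case.
   */
  enum entry_type { SWAP, MIGRATION, DEVICE_PRIVATE };

  static bool non_swap_entry(enum entry_type e)          { return e != SWAP; }
  static bool is_device_private_entry(enum entry_type e) { return e == DEVICE_PRIVATE; }

  /* Buggy order: the broad reject runs first, so device private is "ignored". */
  static const char *classify_buggy(enum entry_type e)
  {
  	if (non_swap_entry(e))
  		return "ignored";
  	return "swap-backed page";
  }

  /* Fixed order: handle the special case, then reject the remaining non-swap entries. */
  static const char *classify_fixed(enum entry_type e)
  {
  	if (is_device_private_entry(e))
  		return "device private page";
  	if (non_swap_entry(e))
  		return "ignored";
  	return "swap-backed page";
  }

  int main(void)
  {
  	printf("buggy: %s\n", classify_buggy(DEVICE_PRIVATE)); /* prints "ignored" */
  	printf("fixed: %s\n", classify_fixed(DEVICE_PRIVATE)); /* prints "device private page" */
  	return 0;
  }

The diff below makes the same reordering inside mc_handle_swap_pte(): the
is_device_private_entry() branch now runs before the non_swap_entry() bail-out.
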
I assume the memory cgroup accounting would be off somehow when moving
a process to another memory cgroup. Currently, the device private page
is charged like a normal anonymous page when allocated and is uncharged
when the page is freed, so I think that path is OK.
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Link: https://lkml.kernel.org/r/20201009215952.2726-1-rcampbell@nvidia.com
Fixes: c733a82874a7 ("mm/memcontrol: support MEMORY_DEVICE_PRIVATE")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'mm')
 mm/memcontrol.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index aa730a3d5c25..87cd5bf1b487 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4780,7 +4780,7 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
 	struct page *page = NULL;
 	swp_entry_t ent = pte_to_swp_entry(ptent);
 
-	if (!(mc.flags & MOVE_ANON) || non_swap_entry(ent))
+	if (!(mc.flags & MOVE_ANON))
 		return NULL;
 
 	/*
@@ -4799,6 +4799,9 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
 		return page;
 	}
 
+	if (non_swap_entry(ent))
+		return NULL;
+
 	/*
 	 * Because lookup_swap_cache() updates some statistics counter,
 	 * we call find_get_page() with swapper_space directly.