author     andrew.yang <andrew.yang@mediatek.com>           2023-02-22 14:42:20 +0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2023-03-10 09:34:20 +0100
commit     daa5a586e43aab6ee8a0b382d1a9c98e52583b4b (patch)
tree       0dd3ab9ad2e38d4355cdcd5123956cef42bd70d2 /mm
parent     dc3809f390357c8992f0a23083da934a20fef9af (diff)
mm/damon/paddr: fix missing folio_put()
commit 3f98c9a62c338bbe06a215c9491e6166ea39bf82 upstream.

damon_get_folio() always increases the folio _refcount, and folio_isolate_lru() increases it again if the folio's lru flag is set. So when an unevictable folio is isolated successfully, it holds two extra references. The one from folio_isolate_lru() is dropped in folio_putback_lru(), but the one from damon_get_folio() is left behind, leaving the folio pinned. Whatever the case, the _refcount taken by damon_get_folio() should be dropped.

Link: https://lkml.kernel.org/r/20230222064223.6735-1-andrew.yang@mediatek.com
Fixes: 57223ac29584 ("mm/damon/paddr: support the pageout scheme")
Signed-off-by: andrew.yang <andrew.yang@mediatek.com>
Reviewed-by: SeongJae Park <sj@kernel.org>
Cc: <stable@vger.kernel.org> [5.16.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
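To make the reference accounting easier to follow, here is a minimal sketch of the corrected loop in the page-based code this stable branch carries (the folio helpers named above correspond to damon_get_page()/isolate_lru_page()/putback_lru_page() here). Only the lines visible in the hunk below come from the patch; the loop header, the damon_get_page() and isolate_lru_page() calls and the reclaim_pages() call site are assumed context reconstructed from the commit message, not a verbatim copy of the tree.

/*
 * Sketch only (assumed context around the hunk below).  The point is
 * that the reference taken by damon_get_page() is paired with exactly
 * one put_page(), no matter which branch the page takes.
 */
for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
	struct page *page = damon_get_page(PHYS_PFN(addr));	/* ref A */

	if (!page)
		continue;

	if (isolate_lru_page(page)) {	/* non-zero: isolation failed */
		put_page(page);		/* drop ref A */
		continue;
	}
	/* Isolation succeeded, so the page now also holds ref B. */
	if (PageUnevictable(page))
		putback_lru_page(page);			/* drops ref B */
	else
		list_add(&page->lru, &page_list);	/* reclaim_pages() drops ref B */
	put_page(page);		/* always drop ref A -- this is the fix */
}
applied = reclaim_pages(&page_list);

Before the patch, the final put_page() lived only in the else branch, so the unevictable path returned without dropping ref A; that leftover reference is the pinned page the message describes.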
Diffstat (limited to 'mm')
-rw-r--r--  mm/damon/paddr.c | 7
1 file changed, 3 insertions, 4 deletions
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index e1a4315c4be6..402d30b37aba 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -219,12 +219,11 @@ static unsigned long damon_pa_pageout(struct damon_region *r)
 			put_page(page);
 			continue;
 		}
-		if (PageUnevictable(page)) {
+		if (PageUnevictable(page))
 			putback_lru_page(page);
-		} else {
+		else
 			list_add(&page->lru, &page_list);
-			put_page(page);
-		}
+		put_page(page);
 	}
 	applied = reclaim_pages(&page_list);
 	cond_resched();