author | Hugh Dickins <hughd@google.com> | 2018-06-01 16:50:50 -0700
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2018-06-06 16:46:23 +0200
commit | d71f830d8cf792e51197a2291cf897372c49395a (patch)
tree | 3a4de9f1bf55fc686979e0179d50d298f61b81d7 /mm
parent | 914812331dbb6d61ab848b64c9b2d7847f299dfb (diff)
mm: fix the NULL mapping case in __isolate_lru_page()
commit 145e1a71e090575c74969e3daa8136d1e5b99fc8 upstream.
George Boole would have noticed a slight error in 4.16 commit
69d763fc6d3a ("mm: pin address_space before dereferencing it while
isolating an LRU page"). Fix it, to match both the comment above it,
and the original behaviour.
Although anonymous pages are not marked PageDirty at first, we have an
old habit of calling SetPageDirty when a page is removed from swap
cache: so there's a category of ex-swap pages that are easily
migratable, but were inadvertently excluded from compaction's async
migration in 4.16.
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1805302014001.12558@eggly.anvils
Fixes: 69d763fc6d3a ("mm: pin address_space before dereferencing it while isolating an LRU page")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Reported-by: Ivan Kalvachev <ikalvachev@gmail.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/vmscan.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b58ca729f20a..76853088f66b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1331,7 +1331,7 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 			return ret;
 
 		mapping = page_mapping(page);
-		migrate_dirty = mapping && mapping->a_ops->migratepage;
+		migrate_dirty = !mapping || mapping->a_ops->migratepage;
 		unlock_page(page);
 		if (!migrate_dirty)
 			return ret;
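The one-line change above flips the predicate so that a NULL mapping counts as migratable, matching the comment and the pre-4.16 behaviour described in the commit message. A minimal user-space sketch of that predicate follows; the struct definitions are simplified stand-ins for the kernel's address_space types, kept only so the two expressions compile and can be compared side by side.

/*
 * Sketch only, not kernel code: contrasts the 4.16 predicate with the
 * fixed one from __isolate_lru_page() for each possible mapping case.
 */
#include <stdbool.h>
#include <stdio.h>

struct address_space_operations {
	int (*migratepage)(void);		/* stand-in for the real hook */
};

struct address_space {
	const struct address_space_operations *a_ops;
};

static int dummy_migratepage(void) { return 0; }

static const struct address_space_operations ops_with_migrate = {
	.migratepage = dummy_migratepage,
};
static const struct address_space_operations ops_without_migrate = {
	.migratepage = NULL,
};

static void check(const char *what, const struct address_space *mapping)
{
	/* 4.16 expression: a NULL mapping is wrongly treated as unmigratable. */
	bool old = mapping && mapping->a_ops->migratepage;
	/* Fixed expression: a NULL mapping (anon/ex-swap page) is migratable. */
	bool fixed = !mapping || mapping->a_ops->migratepage;

	printf("%-32s old=%d fixed=%d\n", what, old, fixed);
}

int main(void)
{
	const struct address_space with = { .a_ops = &ops_with_migrate };
	const struct address_space without = { .a_ops = &ops_without_migrate };

	check("NULL mapping (anon/ex-swap)", NULL);
	check("mapping with ->migratepage", &with);
	check("mapping without ->migratepage", &without);
	return 0;
}

Only the first case changes: a dirty page whose mapping is NULL, such as the ex-swap pages marked dirty on removal from swap cache, is again eligible for compaction's async migration, as it was before 4.16.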