author     Yang, Philip <Philip.Yang@amd.com>        2019-08-15 20:52:56 +0000
committer  Jason Gunthorpe <jgg@mellanox.com>        2019-08-23 10:22:20 -0300
commit     e3fe8e555dd05cf74168d18555c44320ed50a0e1 (patch)
tree       93a7d05212b8be9eabffd7ecb0b4358f061c2293 /mm/hmm.c
parent     c96245148c1ec7af086da322481bf4119d1141d3 (diff)
mm/hmm: fix hmm_range_fault()'s handling of swapped out pages
hmm_range_fault() may return NULL pages because some of the pfns are equal to
HMM_PFN_NONE. This happens randomly under memory pressure. The reason is that,
on the pte path for a swapped out page, hmm_vma_handle_pte() does not update
the fault variable from cpu_flags, so it never calls hmm_vma_do_fault() to
swap the page in.

The fix is to call hmm_pte_need_fault() to update the fault variable.

Fixes: 74eee180b935 ("mm/hmm/mirror: device page fault handler")
Link: https://lore.kernel.org/r/20190815205227.7949-1-Philip.Yang@amd.com
Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Reviewed-by: "Jérôme Glisse" <jglisse@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
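For context, the decision the patch restores boils down to comparing the flags
the caller requested for each pfn against the flags the current pte can
actually satisfy. The sketch below is a minimal, self-contained illustration
of that idea, not the kernel's exact code: the PFN_REQ_* bits and the
pte_need_fault() helper are hypothetical stand-ins for the range->flags[]
machinery and hmm_pte_need_fault() in mm/hmm.c.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for range->flags[HMM_PFN_VALID/WRITE] in mm/hmm.c */
#define PFN_REQ_VALID (UINT64_C(1) << 0)
#define PFN_REQ_WRITE (UINT64_C(1) << 1)

/*
 * Simplified model of hmm_pte_need_fault(): fault if the caller asked
 * for a capability (orig_pfn) that the current pte cannot provide
 * (cpu_flags). A swapped out pte provides nothing, so cpu_flags is 0
 * and any requested access must force a fault to swap the page back in.
 */
static void pte_need_fault(uint64_t orig_pfn, uint64_t cpu_flags,
			   bool *fault, bool *write_fault)
{
	if ((orig_pfn & PFN_REQ_VALID) && !(cpu_flags & PFN_REQ_VALID))
		*fault = true;
	if ((orig_pfn & PFN_REQ_WRITE) && !(cpu_flags & PFN_REQ_WRITE)) {
		*write_fault = true;
		*fault = true;
	}
}

int main(void)
{
	bool fault = false, write_fault = false;

	/* Caller requires a valid page; a swapped out pte offers cpu_flags == 0. */
	pte_need_fault(PFN_REQ_VALID, 0, &fault, &write_fault);
	return fault ? 0 : 1;	/* fault is now true: the page must be faulted in */
}

The bug was that the swap-entry branch skipped this comparison entirely, so
fault and write_fault kept their initial false values and the walk silently
reported HMM_PFN_NONE instead of faulting.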
Diffstat (limited to 'mm/hmm.c')
-rw-r--r--  mm/hmm.c | 3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index 49eace16f9f8..fc05c8fe78b4 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -469,6 +469,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		swp_entry_t entry = pte_to_swp_entry(pte);
 
 		if (!non_swap_entry(entry)) {
+			cpu_flags = pte_to_hmm_pfn_flags(range, pte);
+			hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
+					   &fault, &write_fault);
 			if (fault || write_fault)
 				goto fault;
 			return 0;
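With cpu_flags and the two fault flags now computed for swap entries, the
pre-existing "if (fault || write_fault) goto fault;" check finally fires for
swapped out ptes; the fault path then, per the commit message, reaches
hmm_vma_do_fault() to swap the page back in, so the pfn is filled in rather
than left at HMM_PFN_NONE.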