author    Li Zhijian <lizhijian@cn.fujitsu.com>    2021-09-08 18:10:02 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>    2021-09-08 18:45:52 -0700
commit    4b42fb213678d2b6a9eeea92a9be200f23e49583 (patch)
tree      6cf5e876461032af8b12e96a20514bbcf6627430 /mm/hmm.c
parent    2d338201d5311bcd79d42f66df4cecbcbc5f4f2c (diff)
mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled
Previously, we noticed that one rpma example failed [1] since commit 36f30e486dce ("IB/core: Improve ODP to use hmm_range_fault()"), which uses the ODP feature to do RDMA WRITE between fsdax files.

After digging into the code, we found that hmm_vma_handle_pte() still returns EFAULT even though all of its requested flags (pfn_req_flags) have been fulfilled. That is because a DAX page is marked as (_PAGE_SPECIAL | _PAGE_DEVMAP) by pte_mkdevmap().

Link: https://github.com/pmem/rpma/issues/1142 [1]
Link: https://lkml.kernel.org/r/20210830094232.203029-1-lizhijian@cn.fujitsu.com
Fixes: 405506274922 ("mm/hmm: add missing call to hmm_pte_need_fault in HMM_PFN_SPECIAL handling")
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
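For context on why pte_special() fires for fsdax mappings: the devmap helper sets both the special and the devmap bit on the PTE. The snippet below is a minimal, illustrative sketch of the relevant x86 helpers (approximately as found in arch/x86/include/asm/pgtable.h of this era); other architectures define these differently, so treat it as an assumption-laden aid to reading the fix, not an authoritative quote.

/* Sketch of the x86 helpers involved (illustrative, not verbatim). */
static inline pte_t pte_mkdevmap(pte_t pte)
{
	/* A DAX/devmap PTE carries both bits, so pte_special() is also true. */
	return pte_set_flags(pte, _PAGE_SPECIAL | _PAGE_DEVMAP);
}

static inline int pte_special(pte_t pte)
{
	return pte_flags(pte) & _PAGE_SPECIAL;
}

static inline int pte_devmap(pte_t pte)
{
	return (pte_flags(pte) & _PAGE_DEVMAP) == _PAGE_DEVMAP;
}

Because pte_special() is true for such PTEs, the pre-fix check in hmm_vma_handle_pte() lumped fsdax pages together with genuinely unmappable special PTEs and returned -EFAULT, even when the walk's requested flags were already satisfied by the present, writable mapping.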
Diffstat (limited to 'mm/hmm.c')
-rw-r--r--  mm/hmm.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index fad6be2bf072..842e26599238 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -295,10 +295,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		goto fault;
 
 	/*
+	 * Bypass devmap pte such as DAX page when all pfn requested
+	 * flags(pfn_req_flags) are fulfilled.
 	 * Since each architecture defines a struct page for the zero page, just
 	 * fall through and treat it like a normal page.
 	 */
-	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
+	if (pte_special(pte) && !pte_devmap(pte) &&
+	    !is_zero_pfn(pte_pfn(pte))) {
 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
 			pte_unmap(ptep);
 			return -EFAULT;
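To see the path this hunk changes in action, consider how an ODP-style consumer faults a user range through hmm_range_fault(). The following is a hypothetical caller sketch against the v5.14-era HMM API; the function name fault_fsdax_range, the npages parameter, and the pre-registered notifier are invented for illustration, and the notifier retry/validation loop that a real driver needs is elided.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/slab.h>

/* Hypothetical caller: fault a user range backed by an fsdax file.
 * Assumes an already-registered struct mmu_interval_notifier *notifier.
 */
static int fault_fsdax_range(struct mmu_interval_notifier *notifier,
			     unsigned long start, unsigned long npages)
{
	unsigned long *pfns;
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = start + (npages << PAGE_SHIFT),
		/* Require a present, writable mapping for every page. */
		.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
	};
	int ret;

	pfns = kvcalloc(npages, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return -ENOMEM;
	range.hmm_pfns = pfns;

	range.notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(notifier->mm);
	/*
	 * Before this fix, hmm_vma_handle_pte() returned -EFAULT here for
	 * fsdax PTEs: they are pte_special(), so the walk rejected them even
	 * though the REQ_FAULT/REQ_WRITE requirements were already met.
	 */
	ret = hmm_range_fault(&range);
	mmap_read_unlock(notifier->mm);

	/* A real caller retries on -EBUSY and re-validates notifier_seq. */
	kvfree(pfns);
	return ret;
}

With the added !pte_devmap(pte) test, such a walk now falls through to the normal-page handling for devmap PTEs and fills the output pfns instead of failing, which is what the rpma/ODP RDMA WRITE case relies on.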