author     Yu Zhao <yuzhao@google.com>  2019-02-12 15:35:58 -0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2019-03-23 14:35:12 +0100
commit     538162d21ac877b060dc057c89f13718f5caffc5 (patch)
tree       9202915a0c6f6f0276d3ef3db37393a1e0322772 /mm
parent     fc4b12f3cad776dff7d56d9f08a96589859feac5 (diff)
mm/gup: fix gup_pmd_range() for dax
[ Upstream commit 414fd080d125408cb15d04ff4907e1dd8145c8c7 ]

For dax pmd, pmd_trans_huge() returns false but pmd_huge() returns true
on x86. So the function works as long as hugetlb is configured.
However, dax doesn't depend on hugetlb.

Link: http://lkml.kernel.org/r/20190111034033.601-1-yuzhao@google.com
Signed-off-by: Yu Zhao <yuzhao@google.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Keith Busch <keith.busch@intel.com>
Cc: "Michael S . Tsirkin" <mst@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
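To make the failure mode concrete, the following is a minimal user-space sketch, not kernel code: the stub_* helpers and the hugetlb_configured flag are invented for illustration and only mimic the predicate values the commit message describes for a dax pmd on x86 (pmd_trans_huge() false, pmd_huge() effectively true only when hugetlb is configured, pmd_devmap() true). It shows that the old condition skips the huge-PMD path of gup_pmd_range() whenever hugetlb is not configured, while the fixed condition takes it unconditionally.

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical stubs mimicking the behaviour described in the commit
 * message for a dax pmd on x86; they are not the kernel helpers.
 */
static bool stub_pmd_trans_huge(void) { return false; }
static bool stub_pmd_huge(bool hugetlb_configured) { return hugetlb_configured; }
static bool stub_pmd_devmap(void) { return true; }

/* Condition before the fix: the dax pmd is missed without hugetlb. */
static bool huge_path_old(bool hugetlb_configured)
{
	return stub_pmd_trans_huge() || stub_pmd_huge(hugetlb_configured);
}

/* Condition after the fix: pmd_devmap() catches the dax pmd either way. */
static bool huge_path_new(bool hugetlb_configured)
{
	return stub_pmd_trans_huge() || stub_pmd_huge(hugetlb_configured) ||
	       stub_pmd_devmap();
}

int main(void)
{
	printf("hugetlb=n: old=%d new=%d\n", huge_path_old(false), huge_path_new(false));
	printf("hugetlb=y: old=%d new=%d\n", huge_path_old(true), huge_path_new(true));
	return 0;
}

Compiled and run, the sketch prints old=0 for the hugetlb=n case, which corresponds to the dax pmd falling through the huge-PMD check that the patch below extends with pmd_devmap().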
Diffstat (limited to 'mm')
-rw-r--r--  mm/gup.c  3
1 file changed, 2 insertions, 1 deletion
diff --git a/mm/gup.c b/mm/gup.c
index 4cc8a6ff0f56..7c0e5b1bbcd4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1643,7 +1643,8 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
 		if (!pmd_present(pmd))
 			return 0;
 
-		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd))) {
+		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
+			     pmd_devmap(pmd))) {
 			/*
 			 * NUMA hinting faults need to be handled in the GUP
 			 * slowpath for accounting purposes and so that they