author		Andrea Arcangeli <aarcange@redhat.com>	2018-11-30 14:09:43 -0800
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2018-12-08 13:03:37 +0100
commit		46466e23bc1bc262c28a020df982233c309ed958 (patch)
tree		418e7fcc33bfd322c5fcbb7607d3a1e1a15dc0b5 /mm/shmem.c
parent		af3edb30cddfcc1eace505af10d08826a9522dbe (diff)
userfaultfd: shmem: UFFDIO_COPY: set the page dirty if VM_WRITE is not set
commit dcf7fe9d89763a28e0f43975b422ff141fe79e43 upstream.

Set the page dirty if VM_WRITE is not set because in such case the pte
won't be marked dirty and the page would be reclaimed without writepage
(i.e. swapout in the shmem case).

This was found by source review.  Most apps (certainly including QEMU)
only use UFFDIO_COPY on PROT_READ|PROT_WRITE mappings or the app can't
modify the memory in the first place.  This is for correctness and it
could help the non cooperative use case to avoid unexpected data loss.

Link: http://lkml.kernel.org/r/20181126173452.26955-6-aarcange@redhat.com
Reviewed-by: Hugh Dickins <hughd@google.com>
Cc: stable@vger.kernel.org
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Reported-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
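The fixed path is reachable from userspace with a plain userfaultfd
sequence.  Below is a minimal, hedged sketch (not part of the patch;
names are illustrative and error handling is omitted) that performs
UFFDIO_COPY into a shmem mapping without VM_WRITE: the memfd is mapped
PROT_READ/MAP_SHARED, which keeps VM_MAYWRITE so the range can still be
registered with userfaultfd, while leaving the vma non-writable so the
set_page_dirty() branch added by this patch is taken.

/*
 * Hedged sketch, not from the patch: exercise UFFDIO_COPY on a shmem
 * mapping that lacks VM_WRITE.  All error checking omitted for brevity.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);

	/* shmem backing store; mapped shared but read-only (no VM_WRITE) */
	int memfd = memfd_create("uffd-shmem-test", 0);
	ftruncate(memfd, psize);
	char *dst = mmap(NULL, psize, PROT_READ, MAP_SHARED, memfd, 0);

	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	/* register the read-only shmem range for missing-page faults */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)dst, .len = psize },
		.mode  = UFFDIO_REGISTER_MODE_MISSING,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* source page with recognizable contents */
	char *src = mmap(NULL, psize, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(src, 0x5a, psize);

	/*
	 * Fill the missing shmem page.  The dst vma has no VM_WRITE, so
	 * the pte is not made dirty; the fix marks the page itself dirty
	 * so reclaim swaps it out instead of silently dropping the data.
	 */
	struct uffdio_copy copy = {
		.dst = (unsigned long)dst,
		.src = (unsigned long)src,
		.len = psize,
	};
	ioctl(uffd, UFFDIO_COPY, &copy);

	printf("dst[0] = 0x%x\n", dst[0]);	/* expect 0x5a */
	return 0;
}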
Diffstat (limited to 'mm/shmem.c')
-rw-r--r--	mm/shmem.c	11
1 file changed, 11 insertions(+), 0 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 581e7baf0478..6c10f1d92251 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2305,6 +2305,16 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
 	if (dst_vma->vm_flags & VM_WRITE)
 		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
+	else {
+		/*
+		 * We don't set the pte dirty if the vma has no
+		 * VM_WRITE permission, so mark the page dirty or it
+		 * could be freed from under us. We could do it
+		 * unconditionally before unlock_page(), but doing it
+		 * only if VM_WRITE is not set is faster.
+		 */
+		set_page_dirty(page);
+	}
 
 	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
@@ -2338,6 +2348,7 @@ out:
 	return ret;
 out_release_uncharge_unlock:
 	pte_unmap_unlock(dst_pte, ptl);
+	ClearPageDirty(page);
 	delete_from_page_cache(page);
 out_release_uncharge:
 	mem_cgroup_cancel_charge(page, memcg, false);