author	Paolo Bonzini <pbonzini@redhat.com>	2022-04-08 13:09:04 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2022-04-20 09:08:29 +0200
commit	eeaf28e2a0128147d687237e59d5407ee1b14693 (patch)
tree	ee9df3d4b7275286d27822ddc55f4fc5b07128d2
parent	053435146f9eca9d12eab9cecdb7ddb7e8a261d1 (diff)
mm/mremap.c: avoid pointless invalidate_range_start/end on mremap(old_size=0)
commit 01e67e04c28170c47700c2c226d732bbfedb1ad0 upstream.

If an mremap() syscall with old_size=0 ends up in move_page_tables(), it
will call invalidate_range_start()/invalidate_range_end() unnecessarily,
i.e. with an empty range.

This causes a WARN in KVM's mmu_notifier.  In the past, empty ranges
have been diagnosed to be off-by-one bugs, hence the WARNing.  Given the
low (so far) number of unique reports, the benefits of detecting more
buggy callers seem to outweigh the cost of having to fix cases such as
this one, where userspace is doing something silly.  In this particular
case, an early return from move_page_tables() is enough to fix the
issue.

Link: https://lkml.kernel.org/r/20220329173155.172439-1-pbonzini@redhat.com
Reported-by: syzbot+6bde52d89cfdf9f61425@syzkaller.appspotmail.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r--	mm/mremap.c	3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/mm/mremap.c b/mm/mremap.c
index 3c7fcd5d5794..2bdb255cde9a 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -203,6 +203,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 	unsigned long mmun_start;	/* For mmu_notifiers */
 	unsigned long mmun_end;		/* For mmu_notifiers */
 
+	if (!len)
+		return 0;
+
 	old_end = old_addr + len;
 	flush_cache_range(vma, old_addr, old_end);