author		Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>	2012-01-04 15:06:43 +0900
committer	Avi Kivity <avi@redhat.com>	2012-02-01 11:42:32 +0200
commit		50e92b3c971129c96a5fffb51dd42691e2ee4004 (patch)
tree		b4fc3f8557b889aeeda8e09f75f67e54e2719191 /virt
parent		f8275f9694b8adf9f3498e747ea4c3e8b984499b (diff)
KVM: Fix __set_bit() race in mark_page_dirty() during dirty logging
It is possible that the __set_bit() in mark_page_dirty() is called simultaneously on the same region of memory, which may result in only one bit being set, because some callers do not take mmu_lock before mark_page_dirty().

This problem is hard to reproduce because when we reach mark_page_dirty() starting from, e.g., tdp_page_fault(), mmu_lock is held throughout __direct_map(); for the same reason, making kvm-unit-tests' dirty log API test write to two pages concurrently did not trigger it.

Instead, we confirmed that the race can actually happen by using spin_is_locked() to check whether some callers reach mark_page_dirty() without holding mmu_lock; those callers probably came through kvm_write_guest_page().

To fix this race, this patch changes the bit operation to the atomic version. Note that nr_dirty_pages also suffers from the race, but an exactly correct count is not needed for now.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
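For illustration only, the following is a minimal userspace sketch of the lost-update pattern described above; it is not kernel code, every name in it is hypothetical, and it assumes GCC or Clang (for __atomic_fetch_or() and __builtin_popcountl()) plus pthreads. One thread sets the even bits of every word of a shared bitmap while another sets the odd bits: the plain read-modify-write (like __set_bit()) can lose bits when both threads hit the same word, while the atomic read-modify-write (like test_and_set_bit()) cannot.

/*
 * Minimal userspace sketch of the race described above (not kernel code;
 * all names are hypothetical).  Two threads mark bits in the same bitmap
 * words: one sets the even bits of every word, the other the odd bits.
 * A plain, non-atomic read-modify-write ("|=", like __set_bit()) can lose
 * bits when both threads hit the same word at once; the atomic RMW
 * (like test_and_set_bit()) cannot.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NWORDS		(1L << 20)
#define BITS_PER_LONG	(8 * (int)sizeof(unsigned long))

static unsigned long *bitmap;

struct worker {
	int first_bit;		/* 0 = even bits, 1 = odd bits */
	int use_atomic;		/* use the atomic RMW? */
};

static void *set_bits(void *arg)
{
	struct worker *w = arg;

	for (long i = 0; i < NWORDS; i++)
		for (int b = w->first_bit; b < BITS_PER_LONG; b += 2) {
			if (w->use_atomic)
				/* atomic RMW, like test_and_set_bit_le() */
				__atomic_fetch_or(&bitmap[i], 1UL << b,
						  __ATOMIC_RELAXED);
			else
				/* plain RMW, like __test_and_set_bit_le();
				 * this is a data race on purpose */
				bitmap[i] |= 1UL << b;
		}
	return NULL;
}

static long run(int use_atomic)
{
	struct worker even = { .first_bit = 0, .use_atomic = use_atomic };
	struct worker odd  = { .first_bit = 1, .use_atomic = use_atomic };
	pthread_t t0, t1;
	long lost = 0;

	for (long i = 0; i < NWORDS; i++)
		bitmap[i] = 0;

	pthread_create(&t0, NULL, set_bits, &even);
	pthread_create(&t1, NULL, set_bits, &odd);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);

	/* every bit of every word should be set; count the ones that are not */
	for (long i = 0; i < NWORDS; i++)
		lost += BITS_PER_LONG - __builtin_popcountl(bitmap[i]);
	return lost;
}

int main(void)
{
	bitmap = calloc(NWORDS, sizeof(*bitmap));
	if (!bitmap)
		return 1;

	printf("non-atomic RMW: %ld bits lost\n", run(0));
	printf("atomic RMW:     %ld bits lost\n", run(1));
	free(bitmap);
	return 0;
}

Built with something like gcc -O2 -pthread, the non-atomic pass typically reports a small non-zero number of lost bits (it may take a few runs on some machines to observe any), while the atomic pass always reports zero, which is why the patch switches to test_and_set_bit_le().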
Diffstat (limited to 'virt')
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7287bf5d1c9e..a91f980077d8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1543,7 +1543,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	if (memslot && memslot->dirty_bitmap) {
 		unsigned long rel_gfn = gfn - memslot->base_gfn;
 
-		if (!__test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap))
+		if (!test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap))
 			memslot->nr_dirty_pages++;
 	}
 }