path: root/virt
* KVM: IOAPIC: Fix level-triggered irq injection hang (Mark McLoughlin, 2008-07-06; 1 file, -1/+1)

  The "remote_irr" variable is used to indicate an interrupt which has been received by the LAPIC, but not acked. In our EOI handler, we unset remote_irr and re-inject the interrupt if the interrupt line is still asserted. However, we do not set remote_irr here, leading to a situation where if kvm_ioapic_set_irq() is called, then we go ahead and call ioapic_service(). This means that IRR is re-asserted even though the interrupt is currently in service (i.e. LAPIC IRR is cleared and ISR/TMR set).

  The issue with this is that when the currently executing interrupt handler finishes and writes LAPIC EOI, then TMR is unset and EOI sent to the IOAPIC. Since IRR is now asserted, but TMR is not, then when the second interrupt is handled, no EOI is sent and if there is any pending interrupt, it is not re-injected.

  This fixes a hang only seen while running mke2fs -j on an 8Gb virtio disk backed by a fully sparse raw file, with aliguori "avoid fragmented virtio-blk transfers by copying" changes.

  Signed-off-by: Mark McLoughlin <markmc@redhat.com>
  Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: ioapic: fix lost interrupt when changing a device's irq (Avi Kivity, 2008-06-24; 1 file, -20/+11)

  The ioapic acknowledge path translates interrupt vectors to irqs. It currently uses a first match algorithm, stopping when it finds the first redirection table entry containing the vector. That fails however if the guest changes the irq to a different line, leaving the old redirection table entry in place (though masked). Result is interrupts not making it to the guest.

  Fix by always scanning the entire redirection table.

  Signed-off-by: Avi Kivity <avi@qumranet.com>
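  A minimal sketch of the full-table scan described above, in the spirit of the fix; the type and field names (struct kvm_ioapic, redirtbl, fields.vector, IOAPIC_NUM_PINS) are assumptions for illustration, not a copy of the actual patch:

      /* Resolve an acked vector back to its IOAPIC pin(s) by scanning the whole
       * redirection table instead of stopping at the first match.
       * Type and field names are illustrative assumptions. */
      static void ioapic_handle_eoi(struct kvm_ioapic *ioapic, int vector)
      {
              int pin;

              for (pin = 0; pin < IOAPIC_NUM_PINS; pin++) {
                      union ioapic_redir_entry *ent = &ioapic->redirtbl[pin];

                      if (ent->fields.vector != vector)
                              continue;
                      /* clear remote_irr and re-inject if the line is still
                       * asserted -- keep scanning, do not break here */
              }
      }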
* KVM: IOAPIC: only set remote_irr if interrupt was injected (Marcelo Tosatti, 2008-06-06; 1 file, -10/+11)

  There's a bug in the IOAPIC code for level-triggered interrupts. It's relatively easy to trigger by interrupt sharing (virtio-blk + usbtablet was the testcase, initially reported by Gerd von Egidy).

  The "remote_irr" variable is used to indicate accepted but not yet acked interrupts; it is cleared from the EOI handler. The problem is that the EOI handler clears remote_irr unconditionally, even if it reinjected another pending interrupt. In that case, kvm_ioapic_set_irq() proceeds to ioapic_service(), which sets remote_irr even if it failed to inject (since the IRR was high due to EOI reinjection). Since the TMR bit has been cleared by the first EOI, the second one fails to clear remote_irr. The end result is a dead interrupt line.

  Fix it by setting remote_irr only if a new pending interrupt has been generated (and the TMR bit for the vector in question set).

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Fix kvm_vcpu_block() task state race (Marcelo Tosatti, 2008-05-18; 1 file, -14/+15)

  There's still a race in kvm_vcpu_block(), if a wake_up_interruptible() call happens before the task state is set to TASK_INTERRUPTIBLE:

      CPU0                                      CPU1
      kvm_vcpu_block
        add_wait_queue
        kvm_cpu_has_interrupt = 0
                                                set interrupt
                                                if (waitqueue_active())
                                                    wake_up_interruptible()
        kvm_cpu_has_pending_timer
        kvm_arch_vcpu_runnable
        signal_pending
        set_current_state(TASK_INTERRUPTIBLE)
        schedule()

  Can be fixed by using prepare_to_wait() which sets the task state before testing for the wait condition.

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
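  A minimal sketch of the prepare_to_wait() pattern the fix relies on, assuming the vcpu's wait queue lives in vcpu->wq; the wake-up conditions are the ones named in the trace above:

      /* Sketch: set the task state *before* re-checking the wake-up conditions,
       * so a wake_up_interruptible() racing in between is no longer lost. */
      DEFINE_WAIT(wait);

      for (;;) {
              prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
              if (kvm_cpu_has_interrupt(vcpu) ||
                  kvm_cpu_has_pending_timer(vcpu) ||
                  kvm_arch_vcpu_runnable(vcpu) ||
                  signal_pending(current))
                      break;
              schedule();
      }
      finish_wait(&vcpu->wq, &wait);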
* KVM: Export necessary function for EPT (Sheng Yang, 2008-05-04; 1 file, -0/+1)

  Signed-off-by: Sheng Yang <sheng.yang@intel.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* [PATCH] sanitize anon_inode_getfd() (Al Viro, 2008-05-01; 1 file, -16/+5)

  a) none of the callers even looks at inode or file returned by anon_inode_getfd()
  b) any caller that would try to look at those would be racy, since by the time it returns we might have raced with close() from another thread and that file would be pining for fjords.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* KVM: kill file->f_count abuse in kvm (Al Viro, 2008-04-27; 1 file, -6/+6)

  Use kvm's own refcounting instead of playing with ->filp->f_count. That will allow us to get rid of a lot of crap in anon_inode_getfd() and kill a race in kvm_dev_ioctl_create_vm() (the file might have been closed immediately by another thread, so ->filp might point to an already freed struct file by the time we get around to setting it).

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Rename debugfs_dir to kvm_debugfs_dir (Hollis Blanchard, 2008-04-27; 2 files, -6/+6)

  It's a globally exported symbol now.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: add ioctls to save/store mpstate (Marcelo Tosatti, 2008-04-27; 1 file, -0/+24)

  So userspace can save/restore the mpstate during migration.

  [avi: export the #define constants describing the value]
  [christian: add s390 stubs]
  [avi: ditto for ia64]

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  Signed-off-by: Carsten Otte <cotte@de.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
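  A hedged userspace sketch of how save/restore ioctls like these are used around migration; the KVM_GET_MP_STATE/KVM_SET_MP_STATE names and struct kvm_mp_state follow the KVM API as later documented, and the surrounding helpers are illustrative, not taken from any particular userspace:

      /* Save a vcpu's mp_state on the source, restore it on the destination. */
      #include <linux/kvm.h>
      #include <sys/ioctl.h>

      static int save_mp_state(int vcpu_fd, struct kvm_mp_state *mp)
      {
              return ioctl(vcpu_fd, KVM_GET_MP_STATE, mp);    /* fills mp->mp_state */
      }

      static int restore_mp_state(int vcpu_fd, struct kvm_mp_state *mp)
      {
              return ioctl(vcpu_fd, KVM_SET_MP_STATE, mp);
      }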
* KVM: hlt emulation should take in-kernel APIC/PIT timers into account (Marcelo Tosatti, 2008-04-27; 1 file, -0/+1)

  Timers that fire between guest hlt and vcpu_block's add_wait_queue() are ignored, possibly resulting in hangs.

  Also make sure that atomic_inc and waitqueue_active tests happen in the specified order, otherwise the following race is open:

      CPU0                                      CPU1
      if (waitqueue_active(wq))
                                                add_wait_queue()
                                                if (!atomic_read(pit_timer->pending))
                                                        schedule()
      atomic_inc(pit_timer->pending)

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
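  A sketch of the ordering the second half of the commit asks for on the timer side: publish the pending tick before testing the wait queue, so the vcpu either sees the pending timer before sleeping or gets woken. The barrier choice and field names are illustrative assumptions, not the actual patch:

      /* Timer side: raise "pending" before checking for a sleeping vcpu. */
      atomic_inc(&pit_timer->pending);
      smp_mb();   /* order the increment against the waitqueue_active() test */
      if (waitqueue_active(&vcpu->wq))
              wake_up_interruptible(&vcpu->wq);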
* KVM: Add kvm trace userspace interface (Feng (Eric) Liu, 2008-04-27; 2 files, -1/+283)

  This interface allows a userspace application to read the trace of kvm-related events through relayfs.

  Signed-off-by: Feng (Eric) Liu <eric.e.liu@intel.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Don't assume struct page for x86 (Anthony Liguori, 2008-04-27; 1 file, -7/+61)

  This patch introduces a gfn_to_pfn() function and corresponding functions like kvm_release_pfn_dirty(). Using these new functions, we can modify the x86 MMU to no longer assume that it can always get a struct page for any given gfn.

  We don't want to eliminate gfn_to_page() entirely because a number of places assume they can do gfn_to_page() and then kmap() the results. When we support IO memory, gfn_to_page() will fail for IO pages although gfn_to_pfn() will succeed.

  This does not implement support for avoiding reference counting for reserved RAM or for IO memory. However, it should make those things pretty straightforward.

  Since we're only introducing new common symbols, I don't think it will break the non-x86 architectures, but I haven't tested those. I've tested Intel, AMD, NPT, and hugetlbfs with Windows and Linux guests.

  [avi: fix overflow when shifting left pfns by adding casts]

  Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
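  A hedged sketch of the pfn-based pattern this introduces: translate a gfn to a pfn, use the frame, and release it through the pfn helpers instead of assuming a struct page. gfn_to_pfn() and kvm_release_pfn_dirty() are named by the commit; the clean-release counterpart and the pfn_t spelling are assumptions:

      /* Map a guest frame by pfn and release it when done. */
      pfn_t pfn = gfn_to_pfn(kvm, gfn);

      /* ... install pfn into a shadow pte, or kmap it when RAM-backed ... */

      if (dirtied)
              kvm_release_pfn_dirty(pfn);
      else
              kvm_release_pfn_clean(pfn);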
* KVM: add vm refcounting (Izik Eidus, 2008-04-27; 1 file, -1/+16)

  The main purpose of adding these functions is the ability to release the spinlock that protects the kvm list while still being able to do operations on a specific kvm in a safe way.

  Signed-off-by: Izik Eidus <izike@qumranet.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
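  A hedged sketch of what such refcounting typically looks like: a counter on struct kvm with get/put helpers, so a holder can drop the list lock while keeping the VM alive. The field and helper names (users_count, kvm_destroy_vm) are assumptions for illustration:

      /* Take a reference before dropping the list lock; destroy the VM only
       * when the last reference goes away. */
      void kvm_get_kvm(struct kvm *kvm)
      {
              atomic_inc(&kvm->users_count);
      }

      void kvm_put_kvm(struct kvm *kvm)
      {
              if (atomic_dec_and_test(&kvm->users_count))
                      kvm_destroy_vm(kvm);
      }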
* KVM: Use kzalloc to avoid allocating kvm_regs from kernel stack (Xiantao Zhang, 2008-04-27; 1 file, -11/+22)

  Since the size of kvm_regs is too big to allocate from kernel stack on ia64, use kzalloc to allocate it.

  Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
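  The pattern is the usual heap-instead-of-stack one; a minimal sketch, with the ioctl plumbing elided and the arch accessor name assumed:

      /* Allocate kvm_regs on the heap instead of the (too small on ia64) stack. */
      struct kvm_regs *kvm_regs;
      int r;

      kvm_regs = kzalloc(sizeof(struct kvm_regs), GFP_KERNEL);
      if (!kvm_regs)
              return -ENOMEM;
      r = kvm_arch_vcpu_ioctl_get_regs(vcpu, kvm_regs);
      /* ... copy_to_user(argp, kvm_regs, sizeof(struct kvm_regs)) on success ... */
      kfree(kvm_regs);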
* KVM: MMU: large page support (Marcelo Tosatti, 2008-04-27; 1 file, -1/+24)

  Create large page mappings if the guest PTEs are marked as such and the underlying memory is hugetlbfs backed. If the largepage contains write-protected pages, a large pte is not used.

  Gives a consistent 2% improvement for data copies on a ram-mounted filesystem, without NPT/EPT. Anthony measures a 4% improvement on 4-way kernbench, with NPT.

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: ignore zapped root pagetables (Marcelo Tosatti, 2008-04-27; 1 file, -0/+23)

  Mark zapped root pagetables as invalid and ignore such pages during lookup. This is a problem with the cr3-target feature, where a zapped root table fools the faulting code into creating a read-only mapping. The result is a lockup if the instruction can't be emulated.

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Cc: Anthony Liguori <aliguori@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Disable pagefaults during copy_from_user_inatomic() (Andrea Arcangeli, 2008-04-27; 1 file, -0/+2)

  With CONFIG_PREEMPT=n, this is needed in order to keep the fault-in code from sleeping.

  Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
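  A sketch of the guarded atomic copy described above; with CONFIG_PREEMPT=n it is pagefault_disable() that keeps the fault-in path from sleeping. The in-kernel spelling __copy_from_user_inatomic() and the surrounding variables are assumptions here:

      /* Keep the atomic user copy from sleeping in its fault-in path. */
      pagefault_disable();
      r = __copy_from_user_inatomic(data, (void __user *)addr, len);
      pagefault_enable();
      if (r)
              return -EFAULT;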
* KVM: Limit vcpu mmap size to one page on non-x86 (Avi Kivity, 2008-04-27; 1 file, -1/+4)

  The second page is only needed on archs that support pio.

  Noted by Carsten Otte.

  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Only x86 has pio (Avi Kivity, 2008-04-27; 1 file, -0/+2)

  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: constify function pointer tables (Jan Engelhardt, 2008-04-27; 1 file, -2/+2)

  Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Route irq 0 to vcpu 0 exclusively (Avi Kivity, 2008-03-04; 1 file, -0/+8)

  Some Linux versions allow the timer interrupt to be processed by more than one cpu, leading to hangs due to tsc instability. Work around the issue by only dispatching the interrupt to vcpu 0.

  Problem analyzed (and patch tested) by Sheng Yang.

  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: remove the usage of the mmap_sem for the protection of the memory slots. (Izik Eidus, 2008-03-04; 1 file, -2/+3)

  This patch replaces the mmap_sem lock for the memory slots with a new kvm-private lock. It is needed because until now there were cases where kvm accessed user memory while holding the mmap semaphore.

  Signed-off-by: Izik Eidus <izike@qumranet.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* libfs: allow error return from simple attributes (Christoph Hellwig, 2008-02-08; 1 file, -8/+8)

  Sometimes simple attributes might need to return an error, e.g. when acquiring a mutex interruptibly. In fact we have that situation in spufs already, which is the original user of the simple attributes. This patch merges the temporarily forked attributes in spufs back into the main ones and allows them to return errors.

  [akpm@linux-foundation.org: build fix]

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Cc: <stefano.brivio@polimi.it>
  Cc: Arnd Bergmann <arnd@arndb.de>
  Cc: Greg KH <greg@kroah.com>
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
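  After this change a simple-attribute getter or setter can report failure by returning an error code; a hedged sketch of the pattern, with the attribute name, structure, and lock made up for illustration:

      /* Getter that can now fail, e.g. when taking a mutex interruptibly. */
      static int foo_attr_get(void *data, u64 *val)
      {
              struct foo *foo = data;

              if (mutex_lock_interruptible(&foo->lock))
                      return -EINTR;
              *val = foo->value;
              mutex_unlock(&foo->lock);
              return 0;
      }
      DEFINE_SIMPLE_ATTRIBUTE(foo_attr_fops, foo_attr_get, NULL, "%llu\n");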
* KVM: MMU: Switch to mmu spinlock (Marcelo Tosatti, 2008-01-30; 1 file, -2/+1)

  Convert the synchronization of the shadow handling to a separate mmu_lock spinlock.

  Also guard fetch() by mmap_sem in read-mode to protect against alias and memslot changes.

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Add kvm_read_guest_atomic() (Marcelo Tosatti, 2008-01-30; 1 file, -0/+20)

  In preparation for a mmu spinlock, add kvm_read_guest_atomic() and use it in fetch() and prefetch_page().

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
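  A hedged sketch of how an atomic guest read is used from a non-sleeping context such as under the mmu spinlock; only the kvm_read_guest_atomic() call itself comes from the commit, the gpa/pte variables are illustrative:

      /* Read a guest pte without sleeping, e.g. while holding mmu_lock. */
      u64 pte;
      int r;

      r = kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &pte, sizeof(pte));
      if (r)
              return;         /* fall back to a sleeping path outside the lock */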
* KVM: MMU: Concurrent guest walkers (Marcelo Tosatti, 2008-01-30; 1 file, -17/+5)

  Do not hold kvm->lock mutex across the entire pagefault code, only acquire it in places where it is necessary, such as mmu hash list, active list, rmap and parent pte handling.

  Allow concurrent guest walkers by switching walk_addr() to use mmap_sem in read-mode. And get rid of the lockless __gfn_to_page.

  [avi: move kvm_mmu_pte_write() locking inside the function]
  [avi: add locking for real mode]
  [avi: fix cmpxchg locking]

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Move ioapic code to common directory. (Zhang Xiantao, 2008-01-30; 2 files, -0/+498)

  Move ioapic code to common, since IA64 also needs it.

  Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Move drivers/kvm/* to virt/kvm/ (Avi Kivity, 2008-01-30; 2 files, -0/+1456)

  Signed-off-by: Avi Kivity <avi@qumranet.com>