* Merge tag 'kvm-x86-misc-6.9' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini, 2024-03-11, 15 files, -177/+184)

      KVM x86 misc changes for 6.9:

      - Explicitly initialize a variety of on-stack variables in the emulator
        that triggered KMSAN false positives (though in fairness to KMSAN,
        it's comically difficult to see that the uninitialized memory is
        never truly consumed).

      - Fix the debugregs ABI for 32-bit KVM, and clean up code related to
        reading DR6 and DR7.

      - Rework the "force immediate exit" code so that vendor code ultimately
        decides how and when to force the exit.  This allows VMX to further
        optimize handling preemption timer exits, and allows SVM to avoid
        sending a duplicate IPI (SVM also has a need to force an exit).

      - Fix a long-standing bug where kvm_has_noapic_vcpu could be left
        elevated if vCPU creation ultimately failed, and add a WARN to guard
        against similar bugs.

      - Provide a dedicated arch hook for checking if a different vCPU was
        in-kernel (for directed yield), and simplify the logic for checking
        if the currently loaded vCPU is in-kernel.

      - Misc cleanups and fixes.

| * KVM: SVM: Rename vmplX_ssp -> plX_ssp  (John Allen, 2024-02-27, 1 file, -4/+4)

      The SSP fields in the SEV-ES save area were mistakenly named vmplX_ssp
      instead of plX_ssp.  Rename these to the correct names as defined in
      the APM.

      Fixes: 6d3b3d34e39e ("KVM: SVM: Update the SEV-ES save area mapping")
      Signed-off-by: John Allen <john.allen@amd.com>
      Link: https://lore.kernel.org/r/20240227200356.35114-1-john.allen@amd.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: Add a comment explaining the directed yield pending interrupt logic  (Sean Christopherson, 2024-02-22, 1 file, -0/+7)

      Add a comment to explain why KVM treats vCPUs with pending interrupts
      as in-kernel when a vCPU wants to yield to a vCPU that was preempted
      while running in kernel mode.

      Link: https://lore.kernel.org/r/20240110003938.490206-5-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: x86: Clean up directed yield API for "has pending interrupt"  (Sean Christopherson, 2024-02-22, 1 file, -5/+2)

      Directly return the boolean result of whether or not a vCPU has a
      pending interrupt instead of effectively doing:

          if (true)
                  return true;
          return false;

      Reviewed-by: Yuan Yao <yuan.yao@intel.com>
      Link: https://lore.kernel.org/r/20240110003938.490206-4-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

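      The shape of the cleanup, as a minimal sketch (the predicate here is
      illustrative, not the exact KVM code):

          /* Before: */
          if (kvm_test_request(KVM_REQ_UNBLOCK, vcpu))
                  return true;
          return false;

          /* After: return the predicate directly. */
          return kvm_test_request(KVM_REQ_UNBLOCK, vcpu);
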
| * KVM: x86: Rely solely on preempted_in_kernel flag for directed yield  (Sean Christopherson, 2024-02-22, 1 file, -6/+2)

      Snapshot preempted_in_kernel using kvm_arch_vcpu_in_kernel() so that
      the flag is "accurate" (or rather, consistent and deterministic within
      KVM) for guests with protected state, and explicitly use
      preempted_in_kernel when checking if a vCPU was preempted in kernel
      mode instead of bouncing through kvm_arch_vcpu_in_kernel().

      Drop the gnarly logic in kvm_arch_vcpu_in_kernel() that redirects to
      preempted_in_kernel if the target vCPU is not the "running", i.e.
      loaded, vCPU, as the only reason that code existed was for the directed
      yield case where KVM wants to check the CPL of a vCPU that may or may
      not be loaded on the current pCPU.

      Cc: Like Xu <like.xu.linux@gmail.com>
      Reviewed-by: Yuan Yao <yuan.yao@intel.com>
      Link: https://lore.kernel.org/r/20240110003938.490206-3-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: Add dedicated arch hook for querying if vCPU was preempted in-kernel  (Sean Christopherson, 2024-02-22, 3 files, -1/+19)

      Plumb in a dedicated hook for querying whether or not a vCPU was
      preempted in-kernel.  Unlike literally every other architecture, x86's
      VMX can check if a vCPU is in kernel context if and only if the vCPU is
      loaded on the current pCPU.

      x86's kvm_arch_vcpu_in_kernel() works around the limitation by querying
      kvm_get_running_vcpu() and redirecting to vcpu->arch.preempted_in_kernel
      as needed.  But that's unnecessary, confusing, and fragile, e.g. x86
      has had at least one bug where KVM incorrectly used a stale
      preempted_in_kernel.

      No functional change intended.

      Reviewed-by: Yuan Yao <yuan.yao@intel.com>
      Link: https://lore.kernel.org/r/20240110003938.490206-2-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

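      A hedged sketch of the x86 side of the hook described above (the
      actual plumbing may differ in detail):

          /* x86 can always answer from the snapshot taken at preemption
           * time, so it never needs the vCPU to be loaded on this pCPU. */
          bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
          {
                  return vcpu->arch.preempted_in_kernel;
          }
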
| * KVM: x86: Sanity check that kvm_has_noapic_vcpu is zero at module_exit()  (Sean Christopherson, 2024-02-22, 1 file, -4/+1)

      WARN if kvm.ko is unloaded with an elevated kvm_has_noapic_vcpu to
      guard against incorrect management of the key, e.g. to detect if KVM
      fails to decrement the key in error paths.  Because kvm_has_noapic_vcpu
      is purely an optimization, in all likelihood KVM could completely botch
      handling of kvm_has_noapic_vcpu and no one would notice (which is a
      good argument for deleting the key entirely, but that's a problem for
      another day).

      Note, ideally the sanity check would be performed when kvm_usage_count
      goes to zero, but adding an arch callback just for this sanity check
      isn't at all worth doing.

      Link: https://lore.kernel.org/r/20240209222047.394389-3-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: x86: Move "KVM no-APIC vCPU" key management into local APIC code  (Sean Christopherson, 2024-02-22, 2 files, -27/+29)

      Move incrementing and decrementing of kvm_has_noapic_vcpu into
      kvm_create_lapic() and kvm_free_lapic() respectively to fix a benign
      bug where KVM fails to decrement the count if vCPU creation ultimately
      fails, e.g. due to a memory allocation failing.

      Note, the bug is benign as kvm_has_noapic_vcpu is used purely to
      optimize lapic_in_kernel() checks, and that optimization is quite
      dubious.  That, and practically speaking no setup that cares at all
      about performance runs with a userspace local APIC.

      Reported-by: Li RongQing <lirongqing@baidu.com>
      Cc: Maxim Levitsky <mlevitsk@redhat.com>
      Reviewed-by: Xu Yilun <yilun.xu@linux.intel.com>
      Link: https://lore.kernel.org/r/20240209222047.394389-2-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

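      A sketch of the fix's shape, assuming kvm_has_noapic_vcpu is a static
      key counting vCPUs without an in-kernel APIC (simplified; the real
      functions do considerably more):

          int kvm_create_lapic(struct kvm_vcpu *vcpu)
          {
                  /* Pairing the inc with creation means error paths in vCPU
                   * creation can no longer leak an increment. */
                  if (!irqchip_in_kernel(vcpu->kvm)) {
                          static_branch_inc(&kvm_has_noapic_vcpu);
                          return 0;
                  }
                  /* ... allocate and initialize vcpu->arch.apic ... */
                  return 0;
          }

          void kvm_free_lapic(struct kvm_vcpu *vcpu)
          {
                  if (!vcpu->arch.apic) {
                          static_branch_dec(&kvm_has_noapic_vcpu);
                          return;
                  }
                  /* ... tear down vcpu->arch.apic ... */
          }
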
| * KVM: x86: Fully defer to vendor code to decide how to force immediate exit  (Sean Christopherson, 2024-02-22, 6 files, -36/+19)

      Now that vmx->req_immediate_exit is used only in the scope of
      vmx_vcpu_run(), use force_immediate_exit to detect that KVM should
      usurp the VMX preemption timer to force a VM-Exit and let vendor code
      fully handle forcing a VM-Exit.

      Opportunistically drop __kvm_request_immediate_exit() and just have
      vendor code call smp_send_reschedule() directly.  SVM already does this
      when injecting an event while also trying to single-step an IRET, i.e.
      it's not exactly secret knowledge that KVM uses a reschedule IPI to
      force an exit.

      Link: https://lore.kernel.org/r/20240110012705.506918-7-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: VMX: Handle KVM-induced preemption timer exits in fastpath for L2  (Sean Christopherson, 2024-02-22, 1 file, -2/+20)

      Eat VMX preemption timer exits in the fastpath regardless of whether L1
      or L2 is active.  The VM-Exit is 100% KVM-induced, i.e. there is
      nothing directly related to the exit that KVM needs to do on behalf of
      the guest, thus there is no reason to wait until the slow path to do
      nothing.

      Opportunistically add comments explaining why preemption timer exits
      for emulating the guest's APIC timer need to go down the slow path.

      Link: https://lore.kernel.org/r/20240110012705.506918-6-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: x86: Move handling of is_guest_mode() into fastpath exit handlers  (Sean Christopherson, 2024-02-22, 2 files, -6/+6)

      Let the fastpath code decide which exits can/can't be handled in the
      fastpath when L2 is active, e.g. when KVM generates a VMX preemption
      timer exit to forcefully regain control, there is no "work" to be done
      and so such exits can be handled in the fastpath regardless of whether
      L1 or L2 is active.

      Moving the is_guest_mode() check into the fastpath code also makes it
      easier to see that L2 isn't allowed to use the fastpath in most cases,
      e.g. it's not immediately obvious why handle_fastpath_preemption_timer()
      is called from the fastpath and the normal path.

      Link: https://lore.kernel.org/r/20240110012705.506918-5-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: VMX: Handle forced exit due to preemption timer in fastpath  (Sean Christopherson, 2024-02-22, 1 file, -5/+8)

      Handle VMX preemption timer VM-Exits due to KVM forcing an exit in the
      exit fastpath, i.e. avoid calling back into handle_preemption_timer()
      for the same exit.  There is no work to be done for forced exits, as
      the name suggests the goal is purely to get control back in KVM.

      In addition to shaving a few cycles, this will allow cleanly separating
      handle_fastpath_preemption_timer() from handle_preemption_timer(), e.g.
      it's not immediately obvious why _apparently_ calling
      handle_fastpath_preemption_timer() twice on a "slow" exit is necessary:
      the "slow" call is necessary to handle exits from L2, which are
      excluded from the fastpath by vmx_vcpu_run().

      Link: https://lore.kernel.org/r/20240110012705.506918-4-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: VMX: Re-enter guest in fastpath for "spurious" preemption timer exits  (Sean Christopherson, 2024-02-22, 1 file, -2/+9)

      Re-enter the guest in the fast path if a VMX preemption timer VM-Exit
      was "spurious", i.e. if KVM "soft disabled" the timer by writing -1u
      and by some miracle the timer expired before any other VM-Exit
      occurred.  This is just an intermediate step to cleaning up the
      preemption timer handling; optimizing these types of spurious VM-Exits
      is not interesting as they are extremely rare/infrequent.

      Link: https://lore.kernel.org/r/20240110012705.506918-3-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

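      Taken together, the four preemption-timer commits above give the VMX
      fastpath handler roughly this shape (a sketch reconstructed from the
      changelogs; names and exact checks are hedged):

          static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu,
                                                             bool force_immediate_exit)
          {
                  struct vcpu_vmx *vmx = to_vmx(vcpu);

                  /* Spurious exit: the timer was "soft disabled" (programmed
                   * with -1u) but expired anyway; just re-enter the guest. */
                  if (unlikely(vmx->loaded_vmcs->hv_timer_soft_disabled))
                          return EXIT_FASTPATH_REENTER_GUEST;

                  /* KVM forced this exit purely to regain control; there is
                   * no other work to do, for L1 and L2 alike. */
                  if (force_immediate_exit)
                          return EXIT_FASTPATH_EXIT_HANDLED;

                  /* Emulating the guest's APIC timer for L2 likely requires
                   * synthesizing a nested VM-Exit; take the slow path. */
                  if (is_guest_mode(vcpu))
                          return EXIT_FASTPATH_NONE;

                  kvm_lapic_expired_hv_timer(vcpu);
                  return EXIT_FASTPATH_REENTER_GUEST;
          }
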
| * KVM: x86: Plumb "force_immediate_exit" into kvm_entry() tracepoint  (Sean Christopherson, 2024-02-22, 5 files, -9/+14)

      Annotate the kvm_entry() tracepoint with "immediate exit" when KVM is
      forcing a VM-Exit immediately after VM-Enter, e.g. when KVM wants to
      inject an event but needs to first complete some other operation.
      Knowing that KVM is (or isn't) forcing an exit is useful information
      when debugging issues related to event injection.

      Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
      Link: https://lore.kernel.org/r/20240110012705.506918-2-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: x86: Open code all direct reads to guest DR6 and DR7  (Sean Christopherson, 2024-02-22, 3 files, -6/+6)

      Bite the bullet, and open code all direct reads of DR6 and DR7.  KVM
      currently has a mix of open coded accesses and calls to kvm_get_dr(),
      which is confusing and ugly because there's no rhyme or reason as to
      why any particular chunk of code uses kvm_get_dr().

      The obvious alternative is to force all accesses through kvm_get_dr(),
      but it's not at all clear that doing so would be a net positive, e.g.
      even if KVM ends up wanting/needing to force all reads through a common
      helper, e.g. to play caching games, the cost of reverting this change
      is likely lower than the ongoing cost of maintaining weird, arbitrary
      code.

      No functional change intended.

      Cc: Mathias Krause <minipli@grsecurity.net>
      Reviewed-by: Mathias Krause <minipli@grsecurity.net>
      Link: https://lore.kernel.org/r/20240209220752.388160-3-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: x86: Make kvm_get_dr() return a value, not use an out parameter  (Sean Christopherson, 2024-02-22, 8 files, -49/+21)

      Convert kvm_get_dr()'s output parameter to a return value, and clean up
      most of the mess that was created by forcing callers to provide a
      pointer.

      No functional change intended.

      Acked-by: Mathias Krause <minipli@grsecurity.net>
      Reviewed-by: Mathias Krause <minipli@grsecurity.net>
      Link: https://lore.kernel.org/r/20240209220752.388160-2-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

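      The shape of the API change (the "before" signature is sketched from
      the changelog, not copied from the diff):

          /* Before: out parameter forced on every caller. */
          void kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val);

          /* After: callers simply write 'dr6 = kvm_get_dr(vcpu, 6);'. */
          unsigned long kvm_get_dr(struct kvm_vcpu *vcpu, int dr);
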
| * KVM: x86: rename push to emulate_push for consistency  (Julian Stecklina, 2024-02-07, 1 file, -5/+5)

      push and emulate_pop are counterparts.  Rename push to emulate_push and
      harmonize its function signature with emulate_pop.  This should remove
      a bit of cognitive load when reading this code.

      Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
      Link: https://lore.kernel.org/r/20231009092054.556935-2-julian.stecklina@cyberus-technology.de
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: x86: Clean up partially uninitialized integer in emulate_pop()  (Julian Stecklina, 2024-02-07, 1 file, -6/+8)

      Explicitly zero out variables passed to emulate_pop() as output params
      to harden against consuming uninitialized data, and to make sanitizers
      happy.

      Many flows that use emulate_pop() pass an "unsigned long" so as to be
      able to hold the largest possible operand, but the actual number of
      bytes written is usually the word width of the vCPU.  E.g. if the vCPU
      is in 16-bit or 32-bit mode (on a 64-bit host), the upper portion of
      the output param will be uninitialized.

      Passing around the uninitialized data is benign, as actual KVM usage of
      the output is also tied to the word width, but passing around
      uninitialized data makes some sanitizers rightly complain.

      Note, initializing the data in emulate_pop() is not a safe alternative,
      e.g. it would result in em_leave() clobbering RBP[31:16] if LEAVE were
      emulated with a 16-bit stack.

      Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
      Link: https://lore.kernel.org/r/20231009092054.556935-1-julian.stecklina@cyberus-technology.de
      [sean: massage changelog, drop em_popa() variable size change]
      Signed-off-by: Sean Christopherson <seanjc@google.com>

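      A sketch of the hardening, assuming the emulator's
      emulate_pop(ctxt, dest, len) signature:

          unsigned long val = 0; /* Zero-init: emulate_pop() writes only
                                  * ctxt->op_bytes bytes, e.g. 2 or 4 in
                                  * 16-bit or 32-bit mode, leaving the rest
                                  * of 'val' untouched. */
          int rc = emulate_pop(ctxt, &val, ctxt->op_bytes);
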
| * KVM: x86/emulator: emulate movbe with operand-size prefix  (Thomas Prescher, 2024-02-07, 1 file, -2/+2)

      The MOVBE instruction can come with an operand-size prefix (66h).  In
      this case, the x86 emulation code returns EMULATION_FAILED.

      It turns out that em_movbe can already handle this case and all that is
      missing is an entry in the respective opcode tables to populate
      gprefix->pfx_66.

      Signed-off-by: Thomas Prescher <thomas.prescher@cyberus-technology.de>
      Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
      Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
      Link: https://lore.kernel.org/r/20231212095938.26731-1-julian.stecklina@cyberus-technology.de
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * KVM: x86: Fix broken debugregs ABI for 32 bit kernels  (Mathias Krause, 2024-02-05, 1 file, -2/+11)

      The ioctl()s to get and set KVM's debug registers are broken for 32-bit
      kernels as they'd only copy half of the user register state because of
      a UAPI and in-kernel type mismatch (__u64 vs. unsigned long; 8 vs. 4
      bytes).

      This makes it impossible for userland to set anything but DR0 without
      resorting to bit folding tricks.

      Switch to a loop for copying debug registers that'll implicitly do the
      type conversion for us, if needed.

      There are likely no users (left) for 32-bit KVM; fix the bug
      nonetheless.

      Fixes: a1efbe77c1fd ("KVM: x86: Add support for saving&restoring debug registers")
      Signed-off-by: Mathias Krause <minipli@grsecurity.net>
      Link: https://lore.kernel.org/r/20240203124522.592778-4-minipli@grsecurity.net
      Signed-off-by: Sean Christopherson <seanjc@google.com>

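      A sketch of the fix, using the UAPI's __u64 fields and the kernel's
      unsigned long state (the loop shape is hedged, not the exact diff):

          struct kvm_debugregs dbgregs;   /* UAPI: __u64 db[4], dr6, dr7 */
          unsigned int i;

          /* A memcpy() of db[] copies only half of each register on 32-bit
           * kernels; element-wise assignment converts ulong -> u64. */
          for (i = 0; i < ARRAY_SIZE(vcpu->arch.db); i++)
                  dbgregs.db[i] = vcpu->arch.db[i];
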
| * KVM: x86: Use mutex guards to eliminate __kvm_x86_vendor_init()  (Nikolay Borisov, 2024-01-29, 1 file, -12/+3)

      Use the recently introduced guard(mutex) infrastructure to acquire and
      automatically release vendor_module_lock when the guard goes out of
      scope.  Drop the inner __kvm_x86_vendor_init(); its sole purpose was to
      simplify releasing vendor_module_lock in error paths.

      No functional change intended.

      Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
      Reviewed-by: Kai Huang <kai.huang@intel.com>
      Link: https://lore.kernel.org/r/20231030141728.1406118-1-nik.borisov@suse.com
      [sean: rewrite changelog]
      Signed-off-by: Sean Christopherson <seanjc@google.com>

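      A minimal sketch of the guard(mutex) pattern from linux/cleanup.h (the
      function body here is illustrative, not the actual KVM code):

          static int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
          {
                  /* Unlocked automatically on every return path, so the
                   * inner helper that existed purely to centralize the
                   * unlock in error paths is no longer needed. */
                  guard(mutex)(&vendor_module_lock);

                  if (kvm_x86_ops.hardware_enable)
                          return -EEXIST; /* no explicit unlock needed */

                  return 0;
          }
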
* | Merge tag 'kvm-x86-generic-6.9' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini, 2024-03-11, 1 file, -2/+4)

      KVM common MMU changes for 6.9:

      - Harden KVM against underflowing the active mmu_notifier invalidation
        count, so that "bad" invalidations (usually due to bugs elsewhere in
        the kernel) are detected earlier and are less likely to hang the
        kernel.

      - Fix a benign bug in __kvm_mmu_topup_memory_cache() where the object
        size and number of objects parameters to kvmalloc_array() were
        swapped.

| * | KVM: fix kvm_mmu_memory_cache allocation warning  (Arnd Bergmann, 2024-02-22, 1 file, -1/+1)

      gcc-14 notices that the arguments to kvmalloc_array() are mixed up:

          arch/x86/kvm/../../../virt/kvm/kvm_main.c: In function '__kvm_mmu_topup_memory_cache':
          arch/x86/kvm/../../../virt/kvm/kvm_main.c:424:53: error: 'kvmalloc_array' sizes specified with 'sizeof' in the earlier argument and not in the later argument [-Werror=calloc-transposed-args]
            424 |                 mc->objects = kvmalloc_array(sizeof(void *), capacity, gfp);
                |                                              ^~~~
          arch/x86/kvm/../../../virt/kvm/kvm_main.c:424:53: note: earlier argument should specify number of elements, later size of each element

      The code still works correctly, but the incorrect order prevents the
      compiler from properly tracking the object sizes.

      Fixes: 837f66c71207 ("KVM: Allow for different capacities in kvm_mmu_memory_cache structs")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20240212112419.1186065-1-arnd@kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>

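      The fix is simply to put the element count first, matching
      kvmalloc_array(n, size, flags):

          mc->objects = kvmalloc_array(capacity, sizeof(void *), gfp);
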
| * | KVM: Harden against unpaired kvm_mmu_notifier_invalidate_range_end() calls  (Sean Christopherson, 2024-01-29, 1 file, -1/+3)

      When handling the end of an mmu_notifier invalidation, WARN if
      mn_active_invalidate_count is already 0 and do not decrement it
      further, i.e. avoid causing mn_active_invalidate_count to
      underflow/wrap.  In the worst case scenario, effectively corrupting
      mn_active_invalidate_count could cause kvm_swap_active_memslots() to
      hang indefinitely.

      end() calls are *supposed* to be paired with start(), i.e. underflow
      can only happen if there is a bug elsewhere in the kernel, but due to
      lack of lockdep assertions in the mmu_notifier helpers, it's all too
      easy for a bug to go unnoticed for some time, e.g. see the recently
      introduced PAGEMAP_SCAN ioctl().

      Ideally, mmu_notifiers would incorporate lockdep assertions, but users
      of mmu_notifiers aren't required to hold any one specific lock, i.e.
      adding the necessary annotations to make lockdep aware of all locks
      that are mutually exclusive with mm_take_all_locks() isn't trivial.

      Link: https://lore.kernel.org/all/000000000000f6d051060c6785bc@google.com
      Link: https://lore.kernel.org/r/20240110004239.491290-1-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

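      A sketch of the hardening in the end() handler:

          /* end() calls are supposed to pair with start(); WARN on, and
           * refuse to propagate, an underflow caused by a bug elsewhere. */
          if (!WARN_ON_ONCE(!kvm->mn_active_invalidate_count))
                  kvm->mn_active_invalidate_count--;
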
* | Merge tag 'kvm-x86-asyncpf-6.9' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini, 2024-03-11, 2 files, -25/+49)

      KVM async page fault changes for 6.9:

      - Always flush the async page fault workqueue when a work item is being
        removed, especially during vCPU destruction, to ensure that there are
        no workers running in KVM code when all references to KVM-the-module
        are gone, i.e. to prevent a use-after-free if kvm.ko is unloaded.

      - Grab a reference to the VM's mm_struct in the async #PF worker itself
        instead of gifting the worker a reference, e.g. so that there's no
        need to remember to *conditionally* clean up after the worker.

| * | KVM: Nullify async #PF worker's "apf" pointer as soon as it might be freed  (Sean Christopherson, 2024-02-06, 1 file, -5/+6)

      Nullify the async #PF worker's local "apf" pointer immediately after
      the point where the structure can be freed by the vCPU.  The existing
      comment is helpful, but easy to overlook as there is no associated
      code.

      Update the comment to clarify that it can be freed as soon as the lock
      is dropped, as "after this point" isn't strictly accurate, nor does it
      help understand what prevents the structure from being freed earlier.

      Reviewed-by: Xu Yilun <yilun.xu@intel.com>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Link: https://lore.kernel.org/r/20240110011533.503302-5-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

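      A sketch of the pattern described above (surrounding worker code
      elided):

          spin_lock(&vcpu->async_pf.lock);
          list_add_tail(&apf->link, &vcpu->async_pf.done);
          apf = NULL; /* the vCPU can free 'apf' as soon as the lock drops */
          spin_unlock(&vcpu->async_pf.lock);
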
| * | KVM: Get reference to VM's address space in the async #PF worker  (Sean Christopherson, 2024-02-06, 2 files, -15/+18)

      Get a reference to the target VM's address space in async_pf_execute()
      instead of gifting a reference from kvm_setup_async_pf().  Keeping the
      address space alive just to service an async #PF is
      counter-productive, i.e. if the process is exiting and all vCPUs are
      dead, then NOT doing get_user_pages_remote() and freeing the address
      space asap is desirable.

      Handling the mm reference entirely within async_pf_execute() also
      simplifies the async #PF flows as a whole, e.g. it's not immediately
      obvious when the worker task vs. the vCPU task is responsible for
      putting the gifted mm reference.

      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Reviewed-by: Xu Yilun <yilun.xu@intel.com>
      Link: https://lore.kernel.org/r/20240110011533.503302-4-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

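      A hedged sketch of the worker-side refcounting this yields;
      mmget_not_zero() fails if the process is already exiting, in which
      case the gup() is pointless anyway:

          static void async_pf_execute(struct work_struct *work)
          {
                  struct kvm_async_pf *apf =
                          container_of(work, struct kvm_async_pf, work);
                  struct mm_struct *mm = apf->mm;

                  /* Pin the address space only for the duration of gup(). */
                  if (mmget_not_zero(mm)) {
                          mmap_read_lock(mm);
                          get_user_pages_remote(mm, apf->addr, 1, FOLL_WRITE,
                                                NULL, NULL);
                          mmap_read_unlock(mm);
                          mmput(mm);
                  }
                  /* ... wake the vCPU and queue the completion ... */
          }
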
| * | KVM: Put mm immediately after async #PF worker completes remote gup()  (Sean Christopherson, 2024-02-06, 1 file, -2/+1)

      Put the async #PF worker's reference to the VM's address space as soon
      as the worker is done with the mm.  This will allow deferring getting a
      reference to the worker itself without having to track whether or not
      getting a reference succeeded.

      Note, if the vCPU is still alive, there is no danger of the worker
      getting stuck with tearing down the host page tables, as userspace also
      holds a reference (obviously), i.e. there is no risk of delaying the
      page-present notification due to triggering the slow path in mmput().

      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Reviewed-by: Xu Yilun <yilun.xu@intel.com>
      Link: https://lore.kernel.org/r/20240110011533.503302-3-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: Always flush async #PF workqueue when vCPU is being destroyed  (Sean Christopherson, 2024-02-06, 1 file, -5/+26)

      Always flush the per-vCPU async #PF workqueue when a vCPU is clearing
      its completion queue, e.g. when a VM and all its vCPUs is being
      destroyed.  KVM must ensure that none of its workqueue callbacks is
      running when the last reference to the KVM _module_ is put.  Gifting a
      reference to the associated VM prevents the workqueue callback from
      dereferencing freed vCPU/VM memory, but does not prevent the KVM module
      from being unloaded before the callback completes.

      Drop the misguided VM refcount gifting, as calling kvm_put_kvm() from
      async_pf_execute() if kvm_put_kvm() flushes the async #PF workqueue
      will result in deadlock.  async_pf_execute() can't return until
      kvm_put_kvm() finishes, and kvm_put_kvm() can't return until
      async_pf_execute() finishes:

          WARNING: CPU: 8 PID: 251 at virt/kvm/kvm_main.c:1435 kvm_put_kvm+0x2d/0x320 [kvm]
          Modules linked in: vhost_net vhost vhost_iotlb tap kvm_intel kvm irqbypass
          CPU: 8 PID: 251 Comm: kworker/8:1 Tainted: G        W          6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119
          Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
          Workqueue: events async_pf_execute [kvm]
          RIP: 0010:kvm_put_kvm+0x2d/0x320 [kvm]
          Call Trace:
           <TASK>
           async_pf_execute+0x198/0x260 [kvm]
           process_one_work+0x145/0x2d0
           worker_thread+0x27e/0x3a0
           kthread+0xba/0xe0
           ret_from_fork+0x2d/0x50
           ret_from_fork_asm+0x11/0x20
           </TASK>
          ---[ end trace 0000000000000000 ]---

          INFO: task kworker/8:1:251 blocked for more than 120 seconds.
                Tainted: G        W          6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119
          "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
          task:kworker/8:1     state:D stack:0     pid:251   ppid:2      flags:0x00004000
          Workqueue: events async_pf_execute [kvm]
          Call Trace:
           <TASK>
           __schedule+0x33f/0xa40
           schedule+0x53/0xc0
           schedule_timeout+0x12a/0x140
           __wait_for_common+0x8d/0x1d0
           __flush_work.isra.0+0x19f/0x2c0
           kvm_clear_async_pf_completion_queue+0x129/0x190 [kvm]
           kvm_arch_destroy_vm+0x78/0x1b0 [kvm]
           kvm_put_kvm+0x1c1/0x320 [kvm]
           async_pf_execute+0x198/0x260 [kvm]
           process_one_work+0x145/0x2d0
           worker_thread+0x27e/0x3a0
           kthread+0xba/0xe0
           ret_from_fork+0x2d/0x50
           ret_from_fork_asm+0x11/0x20
           </TASK>

      If kvm_clear_async_pf_completion_queue() actually flushes the
      workqueue, then there's no need to gift async_pf_execute() a reference
      because all invocations of async_pf_execute() will be forced to
      complete before the vCPU and its VM are destroyed/freed.  And that in
      turn fixes the module unloading bug as __fput() won't do module_put()
      on the last vCPU reference until the vCPU has been freed, e.g. if
      closing the vCPU file also puts the last reference to the KVM module.

      Note that kvm_check_async_pf_completion() may also take the work item
      off the completion queue and so also needs to flush the work queue, as
      the work will not be seen by kvm_clear_async_pf_completion_queue().
      Waiting on the workqueue could theoretically delay a vCPU due to
      waiting for the work to complete, but that's a very, very small chance,
      and likely a very small delay.  kvm_arch_async_page_present_queued()
      unconditionally makes a new request, i.e. will effectively delay
      entering the guest, so the remaining work is really just:

          trace_kvm_async_pf_completed(addr, cr2_or_gpa);

          __kvm_vcpu_wake_up(vcpu);

          mmput(mm);

      and mmput() can't drop the last reference to the page tables if the
      vCPU is still alive, i.e. the vCPU won't get stuck tearing down page
      tables.

      Add a helper to do the flushing, specifically to deal with "wakeup all"
      work items, as they aren't actually work items, i.e. are never placed
      in a workqueue.  Trying to flush a bogus workqueue entry rightly makes
      __flush_work() complain (kudos to whoever added that sanity check).

      Note, commit 5f6de5cbebee ("KVM: Prevent module exit until all VMs are
      freed") *tried* to fix the module refcounting issue by having VMs grab
      a reference to the module, but that only made the bug slightly harder
      to hit as it gave async_pf_execute() a bit more time to complete before
      the KVM module could be unloaded.

      Fixes: af585b921e5d ("KVM: Halt vcpu if page it tries to access is swapped out")
      Cc: stable@vger.kernel.org
      Cc: David Matlack <dmatlack@google.com>
      Reviewed-by: Xu Yilun <yilun.xu@intel.com>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Link: https://lore.kernel.org/r/20240110011533.503302-2-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

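      A hedged sketch of the flush helper described above (the exact name,
      sanity checks, and free path may differ from the final code):

          static void kvm_flush_and_free_async_pf_work(struct kvm_async_pf *work)
          {
                  /* "Wakeup all" items are never queued on the workqueue,
                   * and flushing a never-queued work item trips a sanity
                   * check in __flush_work(); only flush real work items. */
                  if (!work->wakeup_all)
                          flush_work(&work->work);

                  kmem_cache_free(async_pf_cache, work);
          }
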
* | Merge tag 'kvm-x86-selftests-6.9' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini, 2024-03-11, 26 files, -240/+802)

      KVM selftests changes for 6.9:

      - Add macros to reduce the amount of boilerplate code needed to write
        "simple" selftests, and to utilize selftest TAP infrastructure, which
        is especially beneficial for KVM selftests with multiple testcases.

      - Add basic smoke tests for SEV and SEV-ES, along with a pile of
        library support for handling private/encrypted/protected memory.

      - Fix benign bugs where tests neglect to close() guest_memfd files.

| * | KVM: selftests: Explicitly close guest_memfd files in some gmem tests  (Dongli Zhang, 2024-03-05, 2 files, -0/+5)

      Explicitly close() guest_memfd files in various guest_memfd and
      private_mem_conversions tests; there's no reason to keep the files open
      until the test exits.

      Fixes: 8a89efd43423 ("KVM: selftests: Add basic selftest for guest_memfd()")
      Fixes: 43f623f350ce ("KVM: selftests: Add x86-only selftest for private memory conversions")
      Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
      Link: https://lore.kernel.org/r/20240227015716.27284-1-dongli.zhang@oracle.com
      [sean: massage changelog]
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: Add a basic SEV-ES smoke test  (Sean Christopherson, 2024-02-28, 4 files, -2/+38)

      Extend sev_smoke_test to also run a minimal SEV-ES smoke test so that
      it's possible to test KVM's unique VMRUN=>#VMEXIT path for SEV-ES
      guests without needing a full blown SEV-ES capable VM, which requires a
      rather absurd amount of properly configured collateral.

      Punt on proper GHCB and ucall support, and instead use the GHCB MSR
      protocol to signal test completion.  The most important thing at this
      point is to have _any_ kind of testing of KVM's __svm_sev_es_vcpu_run().

      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Michael Roth <michael.roth@amd.com>
      Cc: Peter Gonda <pgonda@google.com>
      Cc: Carlos Bilbao <carlos.bilbao@amd.com>
      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Link: https://lore.kernel.org/r/20240223004258.3104051-12-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: Add a basic SEV smoke test  (Peter Gonda, 2024-02-28, 2 files, -0/+59)

      Add a basic smoke test for SEV guests to verify that KVM can launch an
      SEV guest and run a few instructions without exploding.  To verify that
      SEV is indeed enabled, assert that SEV is reported as enabled in
      MSR_AMD64_SEV, a.k.a. SEV_STATUS, which cannot be intercepted by KVM
      (architecturally enforced).

      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Vishal Annapurve <vannapurve@google.com>
      Cc: Ackerly Tng <ackerleytng@google.com>
      Cc: Andrew Jones <andrew.jones@linux.dev>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Michael Roth <michael.roth@amd.com>
      Suggested-by: Michael Roth <michael.roth@amd.com>
      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Signed-off-by: Peter Gonda <pgonda@google.com>
      [sean: rename to "sev_smoke_test"]
      Link: https://lore.kernel.org/r/20240223004258.3104051-11-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

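      A sketch of the guest-side check, assuming the selftests' rdmsr()
      helper and ucall macros (MSR_AMD64_SEV is architectural; bit 0 means
      SEV is enabled):

          #define MSR_AMD64_SEV           0xc0010131
          #define MSR_AMD64_SEV_ENABLED   BIT_ULL(0)

          static void guest_sev_code(void)
          {
                  /* MSR_AMD64_SEV can't be intercepted, so reading it from
                   * within the guest is a reliable "is SEV on?" check. */
                  GUEST_ASSERT(rdmsr(MSR_AMD64_SEV) & MSR_AMD64_SEV_ENABLED);
                  GUEST_DONE();
          }
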
| * | KVM: selftests: Use the SEV library APIs in the intra-host migration test  (Sean Christopherson, 2024-02-28, 1 file, -42/+18)

      Port the existing intra-host SEV(-ES) migration test to the recently
      added SEV library, which handles much of the boilerplate needed to
      create and configure SEV guests.

      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Link: https://lore.kernel.org/r/20240223004258.3104051-10-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: Add library for creating and interacting with SEV guests  (Peter Gonda, 2024-02-28, 7 files, -0/+244)

      Add a library/APIs for creating and interfacing with SEV guests, all of
      which need some amount of common functionality, e.g. an open file
      handle for the SEV driver (/dev/sev), ioctl() wrappers to pass said
      file handle to KVM, tracking of the C-bit, etc.

      Add an x86-specific hook to initialize address properties, a.k.a. the
      location of the C-bit.  An arch-specific hook is rather gross, but x86
      already has a dedicated #ifdef-protected kvm_get_cpu_address_width()
      hook, i.e. the ugliest code already exists.

      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Vishal Annapurve <vannapurve@google.com>
      Cc: Ackerly Tng <ackerleytng@google.com>
      Cc: Andrew Jones <andrew.jones@linux.dev>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Michael Roth <michael.roth@amd.com>
      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Originally-by: Michael Roth <michael.roth@amd.com>
      Signed-off-by: Peter Gonda <pgonda@google.com>
      Co-developed-by: Sean Christopherson <seanjc@google.com>
      Link: https://lore.kernel.org/r/20240223004258.3104051-9-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: Allow tagging protected memory in guest page tables  (Peter Gonda, 2024-02-28, 7 files, -1/+86)

      Add support for tagging and untagging guest physical addresses, e.g. to
      allow x86's SEV and TDX guests to embed shared vs. private information
      in the GPA.  SEV (encryption, a.k.a. C-bit) and TDX (shared, a.k.a.
      S-bit) steal bits from the guest's physical address space; the stolen
      bits are consumed as CPU metadata and effectively alias the "real" GPA.

      Implement generic "tagging" so that the shared vs. private metadata can
      be managed by x86 without bleeding too many details into common code.

      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Vishal Annapurve <vannapurve@google.com>
      Cc: Ackerly Tng <ackerleytng@google.com>
      Cc: Andrew Jones <andrew.jones@linux.dev>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Michael Roth <michael.roth@amd.com>
      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Originally-by: Michael Roth <michael.roth@amd.com>
      Signed-off-by: Peter Gonda <pgonda@google.com>
      Link: https://lore.kernel.org/r/20240223004258.3104051-8-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

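      A sketch of the generic tagging this enables, assuming a per-VM
      gpa_tag_mask that x86 sets to the C-bit/S-bit position:

          /* Strip shared/private metadata to recover the "real" GPA. */
          static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm,
                                                vm_paddr_t gpa)
          {
                  return gpa & ~vm->gpa_tag_mask;
          }
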
| * | KVM: selftests: Explicitly ucall pool from shared memory  (Peter Gonda, 2024-02-28, 1 file, -1/+2)

      Allocate the common ucall pool using vm_vaddr_alloc_shared() so that
      the ucall structures will be placed in shared (unencrypted) memory for
      VMs with support for protected (encrypted) memory, e.g. x86's SEV.

      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Vishal Annapurve <vannapurve@google.com>
      Cc: Ackerly Tng <ackerleytng@google.com>
      Cc: Andrew Jones <andrew.jones@linux.dev>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Michael Roth <michael.roth@amd.com>
      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Signed-off-by: Peter Gonda <pgonda@google.com>
      [sean: massage changelog]
      Link: https://lore.kernel.org/r/20240223004258.3104051-7-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: Add support for protected vm_vaddr_* allocations  (Michael Roth, 2024-02-28, 2 files, -5/+24)

      Test programs may wish to allocate shared vaddrs for things like
      sharing memory with the guest.  Since protected VMs will have their
      memory encrypted by default, an interface is needed to explicitly
      request shared pages.

      Implement this by splitting the common code out from vm_vaddr_alloc()
      and introducing a new vm_vaddr_alloc_shared().

      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Vishal Annapurve <vannapurve@google.com>
      Cc: Ackerly Tng <ackerleytng@google.com>
      Cc: Andrew Jones <andrew.jones@linux.dev>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Michael Roth <michael.roth@amd.com>
      Reviewed-by: Itaru Kitayama <itaru.kitayama@fujitsu.com>
      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Signed-off-by: Michael Roth <michael.roth@amd.com>
      Signed-off-by: Peter Gonda <pgonda@google.com>
      Link: https://lore.kernel.org/r/20240223004258.3104051-6-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: Add support for allocating/managing protected guest memory  (Peter Gonda, 2024-02-28, 2 files, -6/+41)

      Add support for differentiating between protected (a.k.a. private,
      a.k.a. encrypted) memory and normal (a.k.a. shared) memory for VMs that
      support protected guest memory, e.g. x86's SEV.  Provide and manage a
      common bitmap for tracking whether a given physical page resides in
      protected memory, as support for protected memory isn't x86-specific,
      i.e. adding an arch hook would be a net negative now, and in the
      future.

      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Vishal Annapurve <vannapurve@google.com>
      Cc: Ackerley Tng <ackerleytng@google.com>
      Cc: Andrew Jones <andrew.jones@linux.dev>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Michael Roth <michael.roth@amd.com>
      Reviewed-by: Itaru Kitayama <itaru.kitayama@fujitsu.com>
      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Originally-by: Michael Roth <michael.roth@amd.com>
      Signed-off-by: Peter Gonda <pgonda@google.com>
      Co-developed-by: Sean Christopherson <seanjc@google.com>
      Link: https://lore.kernel.org/r/20240223004258.3104051-5-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: Add a macro to iterate over a sparsebit range  (Ackerley Tng, 2024-02-28, 1 file, -0/+20)

      Add sparsebit_for_each_set_range() to allow iterating over the ranges
      of set bits within a given range of bits.  This will be used by x86 SEV
      guests to process protected physical pages (each such page needs to be
      encrypted _after_ being "added" to the VM).

      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Signed-off-by: Ackerley Tng <ackerleytng@google.com>
      [sean: split to separate patch]
      Link: https://lore.kernel.org/r/20240223004258.3104051-4-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

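      A sketch of the intended usage; the protected_phy_pages field comes
      from the adjacent protected-memory commits, while the encryption
      helper here is hypothetical:

          const struct sparsebit *protected = region->protected_phy_pages;
          sparsebit_idx_t first, last;

          /* Encrypt each contiguous run of protected pages after they've
           * been "added" to the VM. */
          sparsebit_for_each_set_range(protected, first, last)
                  sev_encrypt_pages(vm, first << vm->page_shift,
                                    (last - first + 1) << vm->page_shift);
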
| * | KVM: selftests: Make sparsebit structs const where appropriate  (Michael Roth, 2024-02-28, 2 files, -42/+42)

      Make all sparsebit struct pointers "const" where appropriate.  This
      will allow adding a bitmap to track protected/encrypted physical memory
      that tests can access in a read-only fashion.

      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Vishal Annapurve <vannapurve@google.com>
      Cc: Ackerley Tng <ackerleytng@google.com>
      Cc: Andrew Jones <andrew.jones@linux.dev>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Michael Roth <michael.roth@amd.com>
      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Signed-off-by: Michael Roth <michael.roth@amd.com>
      Signed-off-by: Peter Gonda <pgonda@google.com>
      [sean: massage changelog]
      Link: https://lore.kernel.org/r/20240223004258.3104051-3-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: Extend VM creation's @shape to allow control of VM subtype  (Sean Christopherson, 2024-02-28, 2 files, -2/+8)

      Carve out space in the @shape passed to the various VM creation helpers
      to allow using the shape to control the subtype of VM, e.g. to identify
      x86's SEV VMs (which are "regular" VMs as far as KVM is concerned).

      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Vishal Annapurve <vannapurve@google.com>
      Cc: Ackerley Tng <ackerleytng@google.com>
      Cc: Andrew Jones <andrew.jones@linux.dev>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Michael Roth <michael.roth@amd.com>
      Tested-by: Carlos Bilbao <carlos.bilbao@amd.com>
      Link: https://lore.kernel.org/r/20240223004258.3104051-2-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: x86: Use TAP interface in the userspace_msr_exit test  (Thomas Huth, 2024-02-28, 1 file, -39/+13)

      Use the kselftest_harness.h interface in this test to get TAP output,
      so that it is easier for the user to see what the test is doing.

      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Link: https://lore.kernel.org/r/20240208204844.119326-9-thuth@redhat.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: x86: Use TAP interface in the vmx_pmu_caps test  (Thomas Huth, 2024-02-28, 1 file, -40/+12)

      Use the kvm_test_harness.h interface in this test to get TAP output, so
      that it is easier for the user to see what the test is doing.

      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Link: https://lore.kernel.org/r/20240208204844.119326-8-thuth@redhat.com
      [sean: make host_cap static]
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: x86: Use TAP interface in the fix_hypercall test  (Thomas Huth, 2024-02-28, 1 file, -9/+18)

      Use the kvm_test_harness.h interface in this test to get TAP output, so
      that it is easier for the user to see what the test is doing.

      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Link: https://lore.kernel.org/r/20240208204844.119326-7-thuth@redhat.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: x86: Use TAP interface in the sync_regs test  (Thomas Huth, 2024-02-28, 1 file, -26/+84)

      The sync_regs test currently does not have any output (unless one of
      the TEST_ASSERT statements fails), so it's hard for a user to say
      whether a certain new sub-test has been included in the binary or not.
      Let's make this a little bit more user-friendly and include some TAP
      output via the kselftest_harness.h / kvm_test_harness.h interface.

      To be able to use the interface, we have to break up the huge main()
      function here into more fine-grained parts; then we can use the new
      KVM_ONE_VCPU_TEST() macro to define the individual tests.  Since these
      are run with a separate VM now, we also have to make sure to create the
      expected state at the beginning of each test, so some parts grow a
      little bit, which should be OK considering that the individual tests
      are more self-contained now.

      Suggested-by: David Matlack <dmatlack@google.com>
      Suggested-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Link: https://lore.kernel.org/r/20240208204844.119326-6-thuth@redhat.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: Add a macro to define a test with one vcpu  (Thomas Huth, 2024-02-28, 1 file, -0/+36)

      Most tests currently do not give any proper output for the user to see
      how many sub-tests have already been run, or whether new sub-tests are
      part of a binary or not.  So it would be good to support TAP output in
      the KVM selftests.

      There is already a nice framework for this in the kselftest_harness.h
      header which we can use.  But since we also need a vCPU in most KVM
      selftests, it additionally makes sense to introduce our own wrapper
      around this which takes care of creating a VM with one vCPU, so we
      don't have to repeat this boilerplate in each and every test.  Thus
      let's introduce a KVM_ONE_VCPU_TEST() macro here which takes care of
      this.

      Suggested-by: Sean Christopherson <seanjc@google.com>
      Link: https://lore.kernel.org/all/Y2v+B3xxYKJSM%2FfH@google.com/
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Link: https://lore.kernel.org/r/20240208204844.119326-5-thuth@redhat.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

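      A sketch of the resulting test shape (macro names per the changelogs;
      the suite name and body here are illustrative):

          KVM_ONE_VCPU_TEST_SUITE(sync_regs_test);

          /* The harness creates a VM with one vCPU running guest_code
           * before the body executes, and destroys it afterwards; 'vcpu'
           * is provided to the body. */
          KVM_ONE_VCPU_TEST(sync_regs_test, read_regs, guest_code)
          {
                  vcpu_run(vcpu);
          }
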
| * | KVM: selftests: Move setting a vCPU's entry point to a dedicated API  (Sean Christopherson, 2024-02-28, 5 files, -22/+48)

      Extract the code to set a vCPU's entry point out of vm_arch_vcpu_add()
      and into a new API, vcpu_arch_set_entry_point().  Providing a separate
      API will allow creating a KVM selftests harness that can handle tests
      that use different entry points for sub-tests, whereas *requiring* the
      entry point to be specified at vCPU creation makes it difficult to
      create a generic harness, e.g. the boilerplate setup/teardown can't
      easily create and destroy the VM and vCPUs.

      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Link: https://lore.kernel.org/r/20240208204844.119326-4-thuth@redhat.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: x86: sync_regs_test: Get regs structure before modifying it  (Thomas Huth, 2024-02-26, 1 file, -0/+1)

      The regs structure just accidentally contains the right values from the
      previous test in the spot where we want to change rbx.  It's cleaner if
      we properly initialize the structure here before using it.

      Suggested-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Link: https://lore.kernel.org/r/20240208204844.119326-3-thuth@redhat.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

| * | KVM: selftests: x86: sync_regs_test: Use vcpu_run() where appropriate  (Thomas Huth, 2024-02-26, 1 file, -5/+5)

      In the spots where we are expecting a successful run, we should use
      vcpu_run() instead of _vcpu_run() to make sure that the run did not
      fail.

      Suggested-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Link: https://lore.kernel.org/r/20240208204844.119326-2-thuth@redhat.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>

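      The selftests convention being applied, as a sketch:

          int r;

          vcpu_run(vcpu);         /* asserts that KVM_RUN succeeded */

          r = _vcpu_run(vcpu);    /* returns the raw result; the caller
                                   * checks r/errno when failure is the
                                   * expected outcome */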