path: root/arch
Commit message (Collapse) | Author | Age | Files | Lines
* sh: prevent warnings when using iounmapSam Ravnborg2019-07-311-1/+5
[ Upstream commit 733f0025f0fb43e382b84db0930ae502099b7e62 ] When building drm/exynos for sh, as part of an allmodconfig build, the following warning triggered: exynos7_drm_decon.c: In function `decon_remove': exynos7_drm_decon.c:769:24: warning: unused variable `ctx' struct decon_context *ctx = dev_get_drvdata(&pdev->dev); The ctx variable is only used as argument to iounmap(). In sh - allmodconfig CONFIG_MMU is not defined so it ended up in: #define __iounmap(addr) do { } while (0) #define iounmap __iounmap Fix the warning by introducing a static inline function for iounmap. This is similar to several other architectures. Link: http://lkml.kernel.org/r/20190622114208.24427-1-sam@ravnborg.org Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Rich Felker <dalias@libc.org> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: Inki Dae <inki.dae@samsung.com> Cc: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
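The shape of the fix is a trivial empty inline replacing the statement macro; a sketch of the approach (the exact upstream signature may differ slightly):

    /* arch/sh, !CONFIG_MMU: an inline "uses" its argument, so a caller whose
     * only use of a variable is iounmap() no longer warns. */
    static inline void iounmap(volatile void __iomem *addr)
    {
    }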
* powerpc/eeh: Handle hugepages in ioremap spaceOliver O'Halloran2019-07-311-3/+12
[ Upstream commit 33439620680be5225c1b8806579a291e0d761ca0 ] In commit 4a7b06c157a2 ("powerpc/eeh: Handle hugepages in ioremap space") support for using hugepages in the vmalloc and ioremap areas was enabled for radix. Unfortunately this broke EEH MMIO error checking. Detection works by inserting a hook which checks the results of the ioreadXX() set of functions. When a read returns a 0xFFs response we need to check for an error which we do by mapping the (virtual) MMIO address back to a physical address, then mapping physical address to a PCI device via an interval tree. When translating virt -> phys we currently assume the ioremap space is only populated by PAGE_SIZE mappings. If a hugepage mapping is found we emit a WARN_ON(), but otherwise handle the check as though a normal page was found. In pathological cases such as copying a buffer containing a lot of 0xFFs from BAR memory this can result in the system not booting because it's too busy printing WARN_ON()s. There's no real reason to assume huge pages can't be present and we're perfectly capable of handling them, so do that. Fixes: 4a7b06c157a2 ("powerpc/eeh: Handle hugepages in ioremap space") Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com> Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190710150517.27114-1-oohall@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc/boot: add {get, put}_unaligned_be32 to xz_config.hMasahiro Yamada2019-07-311-0/+20
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 9e005b761e7ad153dcf40a6cba1d681fe0830ac6 ] The next commit will make the way of passing CONFIG options more robust. Unfortunately, it would uncover another hidden issue; without this commit, skiroot_defconfig would be broken like this: | WRAP arch/powerpc/boot/zImage.pseries | arch/powerpc/boot/wrapper.a(decompress.o): In function `bcj_powerpc.isra.10': | decompress.c:(.text+0x720): undefined reference to `get_unaligned_be32' | decompress.c:(.text+0x7a8): undefined reference to `put_unaligned_be32' | make[1]: *** [arch/powerpc/boot/Makefile;383: arch/powerpc/boot/zImage.pseries] Error 1 | make: *** [arch/powerpc/Makefile;295: zImage] Error 2 skiroot_defconfig is the only defconfig that enables CONFIG_KERNEL_XZ for ppc, which has never been correctly built before. I figured out the root cause in lib/decompress_unxz.c: | #ifdef CONFIG_PPC | # define XZ_DEC_POWERPC | #endif CONFIG_PPC is undefined here in the ppc bootwrapper because autoconf.h is not included except by arch/powerpc/boot/serial.c XZ_DEC_POWERPC is not defined, therefore, bcj_powerpc() is not compiled for the bootwrapper. With the next commit passing CONFIG_PPC correctly, we would realize that {get,put}_unaligned_be32 was missing. Unlike the other decompressors, the ppc bootwrapper duplicates all the necessary helpers in arch/powerpc/boot/. The other architectures define __KERNEL__ and pull in helpers for building the decompressors. If ppc bootwrapper had defined __KERNEL__, lib/xz/xz_private.h would have included <asm/unaligned.h>: | #ifdef __KERNEL__ | # include <linux/xz.h> | # include <linux/kernel.h> | # include <asm/unaligned.h> However, doing so would cause tons of definition conflicts since the bootwrapper has duplicated everything. I just added copies of {get,put}_unaligned_be32, following the bootwrapper coding convention. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190705100144.28785-1-yamada.masahiro@socionext.com Signed-off-by: Sasha Levin <sashal@kernel.org>
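For illustration, a byte-wise, endianness-independent way to provide the two missing helpers looks like this (a sketch; the actual bootwrapper copies may be implemented differently):

    static inline u32 get_unaligned_be32(const void *p)
    {
    	const u8 *b = p;

    	return ((u32)b[0] << 24) | ((u32)b[1] << 16) | ((u32)b[2] << 8) | b[3];
    }

    static inline void put_unaligned_be32(u32 val, void *p)
    {
    	u8 *b = p;

    	b[0] = val >> 24;
    	b[1] = val >> 16;
    	b[2] = val >> 8;
    	b[3] = val;
    }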
* powerpc/irq: Don't WARN continuously in arch_local_irq_restore()Michael Ellerman2019-07-311-3/+3
| | | | | | | | | | | | | | | | | | [ Upstream commit 0fc12c022ad25532b66bf6f6c818ee1c1d63e702 ] When CONFIG_PPC_IRQ_SOFT_MASK_DEBUG is enabled (uncommon), we have a series of WARN_ON's in arch_local_irq_restore(). These are "should never happen" conditions, but if they do happen they can flood the console and render the system unusable. So switch them to WARN_ON_ONCE(). Fixes: e2b36d591720 ("powerpc/64: Don't trace code that runs with the soft irq mask unreconciled") Fixes: 9b81c0211c24 ("powerpc/64s: make PACA_IRQ_HARD_DIS track MSR[EE] closely") Fixes: 7c0482e3d055 ("powerpc/irq: Fix another case of lazy IRQ state getting out of sync") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190708061046.7075-1-mpe@ellerman.id.au Signed-off-by: Sasha Levin <sashal@kernel.org>
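The change itself is mechanical; with an illustrative condition (not necessarily the one in arch_local_irq_restore()) it looks like:

    /* before: every violation prints a full warning and can flood the console */
    WARN_ON(lazy_irq_pending());

    /* after: the first violation is reported, later ones stay silent */
    WARN_ON_ONCE(lazy_irq_pending());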
* KVM: nVMX: Stash L1's CR3 in vmcs01.GUEST_CR3 on nested entry w/o EPTSean Christopherson2019-07-312-22/+23
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit f087a02941feacf7d6f097522bc67c602fda18e6 ] KVM does not have 100% coverage of VMX consistency checks, i.e. some checks that cause VM-Fail may only be detected by hardware during a nested VM-Entry. In such a case, KVM must restore L1's state to the pre-VM-Enter state as L2's state has already been loaded into KVM's software model. L1's CR3 and PDPTRs in particular are loaded from vmcs01.GUEST_*. But when EPT is disabled, the associated fields hold KVM's shadow values, not L1's "real" values. Fortunately, when EPT is disabled the PDPTRs come from memory, i.e. are not cached in the VMCS. Which leaves CR3 as the sole anomaly. A previously applied workaround to handle CR3 was to force nested early checks if EPT is disabled: commit 2b27924bb1d48 ("KVM: nVMX: always use early vmcs check when EPT is disabled") Forcing nested early checks is undesirable as doing so adds hundreds of cycles to every nested VM-Entry. Rather than take this performance hit, handle CR3 by overwriting vmcs01.GUEST_CR3 with L1's CR3 during nested VM-Entry when EPT is disabled *and* nested early checks are disabled. By stuffing vmcs01.GUEST_CR3, nested_vmx_restore_host_state() will naturally restore the correct vcpu->arch.cr3 from vmcs01.GUEST_CR3. These shenanigans work because nested_vmx_restore_host_state() does a full kvm_mmu_reset_context(), i.e. unloads the current MMU, which guarantees vmcs01.GUEST_CR3 will be rewritten with a new shadow CR3 prior to re-entering L1. vcpu->arch.root_mmu.root_hpa is set to INVALID_PAGE via: nested_vmx_restore_host_state() -> kvm_mmu_reset_context() -> kvm_mmu_unload() -> kvm_mmu_free_roots() kvm_mmu_unload() has WARN_ON(root_hpa != INVALID_PAGE), i.e. we can bank on 'root_hpa == INVALID_PAGE' unless the implementation of kvm_mmu_reset_context() is changed. On the way into L1, VMCS.GUEST_CR3 is guaranteed to be written (on a successful entry) via: vcpu_enter_guest() -> kvm_mmu_reload() -> kvm_mmu_load() -> kvm_mmu_load_cr3() -> vmx_set_cr3() Stuff vmcs01.GUEST_CR3 if and only if nested early checks are disabled as a "late" VM-Fail should never happen win that case (KVM WARNs), and the conditional write avoids the need to restore the correct GUEST_CR3 when nested_vmx_check_vmentry_hw() fails. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20190607185534.24368-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: assembler: Switch ESB-instruction with a vanilla nop if !ARM64_HAS_RASJames Morse2019-07-311-0/+4
| | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 2b68a2a963a157f024c67c0697b16f5f792c8a35 ] The ESB-instruction is a nop on CPUs that don't implement the RAS extensions. This lets us use it in places like the vectors without having to use alternatives. If someone disables CONFIG_ARM64_RAS_EXTN, this instruction still has its RAS extensions behaviour, but we no longer read DISR_EL1 as this register does depend on alternatives. This could go wrong if we want to synchronize an SError from a KVM guest. On a CPU that has the RAS extensions, but the KConfig option was disabled, we consume the pending SError with no chance of ever reading it. Hide the ESB-instruction behind the CONFIG_ARM64_RAS_EXTN option, outputting a regular nop if the feature has been disabled. Reported-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc/mm: Handle page table allocation failuresAneesh Kumar K.V2019-07-311-0/+8
| | | | | | | | | | | | [ Upstream commit 2230ebf6e6dd0b7751e2921b40f6cfe34f09bb16 ] This fixes kernel crash that arises due to not handling page table allocation failures while allocating hugetlb page table. Fixes: e2b3d202d1db ("powerpc: Switch 16GB and 16MB explicit hugepages to a different page table format") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc/mm: mark more tlb functions as __always_inlineMasahiro Yamada2019-07-312-17/+17
| | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 6d3ca7e73642ce17398f4cd5df1780da4a1ccdaf ] With CONFIG_OPTIMIZE_INLINING enabled, Laura Abbott reported error with gcc 9.1.1: arch/powerpc/mm/book3s64/radix_tlb.c: In function '_tlbiel_pid': arch/powerpc/mm/book3s64/radix_tlb.c:104:2: warning: asm operand 3 probably doesn't match constraints 104 | asm volatile(PPC_TLBIEL(%0, %4, %3, %2, %1) | ^~~ arch/powerpc/mm/book3s64/radix_tlb.c:104:2: error: impossible constraint in 'asm' Fixing _tlbiel_pid() is enough to address the warning above, but I inlined more functions to fix all potential issues. To meet the "i" (immediate) constraint for the asm operands, functions propagating "ric" must be always inlined. Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING") Reported-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
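The underlying constraint issue can be shown with a generic illustration (not the radix TLB code itself): an "i" operand must be a compile-time constant at the asm site, which forced inlining guarantees once every caller passes a literal.

    /* generic sketch: "ric" feeds an "i" (immediate) asm operand, so the
     * function must always be folded into callers that pass constants;
     * without inlining it is a plain argument and the constraint fails. */
    static __always_inline void tlb_flush_one(unsigned long ric)
    {
    	asm volatile("# tlbie variant %0" : : "i"(ric));
    }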
* powerpc/4xx/uic: clear pending interrupt after irq type/pol changeChristian Lamparter2019-07-311-0/+1
| | | | | | | | | | | | | | | | | [ Upstream commit 3ab3a0689e74e6aa5b41360bc18861040ddef5b1 ] When testing out gpio-keys with a button, a spurious interrupt (and therefore a key press or release event) gets triggered as soon as the driver enables the irq line for the first time. This patch clears any potential bogus generated interrupt that was caused by the switching of the associated irq's type and polarity. Signed-off-by: Christian Lamparter <chunkeey@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc: silence a -Wcast-function-type warning in dawr_write_file_boolMathieu Malaterre2019-07-311-1/+6
| | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 548c54acba5bd1388d50727a9a126a42d0cd4ad0 ] In commit c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option") the following piece of code was added: smp_call_function((smp_call_func_t)set_dawr, &null_brk, 0); Since GCC 8 this triggers the following warning about incompatible function types: arch/powerpc/kernel/hw_breakpoint.c:408:21: error: cast between incompatible function types from 'int (*)(struct arch_hw_breakpoint *)' to 'void (*)(void *)' [-Werror=cast-function-type] Since the warning is there for a reason, and should not be hidden behind a cast, provide an intermediate callback function to avoid the warning. Fixes: c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option") Suggested-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Mathieu Malaterre <malat@debian.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
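The intermediate callback is essentially a one-line adapter that gives smp_call_function() the void (*)(void *) signature it expects instead of a cast; a sketch (names taken from the commit text, the upstream helper may be spelled differently):

    static void set_dawr_cb(void *info)
    {
    	set_dawr(info);
    }

    /* caller side */
    smp_call_function(set_dawr_cb, &null_brk, 0);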
* um: Silence lockdep complaint about mmap_semJohannes Berg2019-07-311-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 80bf6ceaf9310b3f61934c69b382d4912deee049 ] When we get into activate_mm(), lockdep complains that we're doing something strange: WARNING: possible circular locking dependency detected 5.1.0-10252-gb00152307319-dirty #121 Not tainted ------------------------------------------------------ inside.sh/366 is trying to acquire lock: (____ptrval____) (&(&p->alloc_lock)->rlock){+.+.}, at: flush_old_exec+0x703/0x8d7 but task is already holding lock: (____ptrval____) (&mm->mmap_sem){++++}, at: flush_old_exec+0x6c5/0x8d7 which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #1 (&mm->mmap_sem){++++}: [...] __lock_acquire+0x12ab/0x139f lock_acquire+0x155/0x18e down_write+0x3f/0x98 flush_old_exec+0x748/0x8d7 load_elf_binary+0x2ca/0xddb [...] -> #0 (&(&p->alloc_lock)->rlock){+.+.}: [...] __lock_acquire+0x12ab/0x139f lock_acquire+0x155/0x18e _raw_spin_lock+0x30/0x83 flush_old_exec+0x703/0x8d7 load_elf_binary+0x2ca/0xddb [...] other info that might help us debug this: Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&mm->mmap_sem); lock(&(&p->alloc_lock)->rlock); lock(&mm->mmap_sem); lock(&(&p->alloc_lock)->rlock); *** DEADLOCK *** 2 locks held by inside.sh/366: #0: (____ptrval____) (&sig->cred_guard_mutex){+.+.}, at: __do_execve_file+0x12d/0x869 #1: (____ptrval____) (&mm->mmap_sem){++++}, at: flush_old_exec+0x6c5/0x8d7 stack backtrace: CPU: 0 PID: 366 Comm: inside.sh Not tainted 5.1.0-10252-gb00152307319-dirty #121 Stack: [...] Call Trace: [<600420de>] show_stack+0x13b/0x155 [<6048906b>] dump_stack+0x2a/0x2c [<6009ae64>] print_circular_bug+0x332/0x343 [<6009c5c6>] check_prev_add+0x669/0xdad [<600a06b4>] __lock_acquire+0x12ab/0x139f [<6009f3d0>] lock_acquire+0x155/0x18e [<604a07e0>] _raw_spin_lock+0x30/0x83 [<60151e6a>] flush_old_exec+0x703/0x8d7 [<601a8eb8>] load_elf_binary+0x2ca/0xddb [...] I think it's because in exec_mmap() we have down_read(&old_mm->mmap_sem); ... task_lock(tsk); ... activate_mm(active_mm, mm); (which does down_write(&mm->mmap_sem)) I'm not really sure why lockdep throws in the whole knowledge about the task lock, but it seems that old_mm and mm shouldn't ever be the same (and it doesn't deadlock) so tell lockdep that they're different. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Sasha Levin <sashal@kernel.org>
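Given the one-line diffstat, the fix amounts to annotating the nested acquisition so lockdep treats the old and new mmap_sem as distinct locks; roughly (a sketch, assuming the arch/um activate_mm() path):

    /* old_mm->mmap_sem is already held for read here; "new" can never be the
     * same mm, so take its semaphore in a separate lockdep subclass */
    down_write_nested(&new->mmap_sem, 1);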
* powerpc/rtas: retry when cpu offline races with suspend/migrationNathan Lynch2019-07-311-4/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 9fb603050ffd94f8127df99c699cca2f575eb6a0 ] The protocol for suspending or migrating an LPAR requires all present processor threads to enter H_JOIN. So if we have threads offline, we have to temporarily bring them up. This can race with administrator actions such as SMT state changes. As of dfd718a2ed1f ("powerpc/rtas: Fix a potential race between CPU-Offline & Migration"), rtas_ibm_suspend_me() accounts for this, but errors out with -EBUSY for what almost certainly is a transient condition in any reasonable scenario. Callers of rtas_ibm_suspend_me() already retry when -EAGAIN is returned, and it is typical during a migration for that to happen repeatedly for several minutes polling the H_VASI_STATE hcall result before proceeding to the next stage. So return -EAGAIN instead of -EBUSY when this race is encountered. Additionally: logging this event is still appropriate but use pr_info instead of pr_err; and remove use of unlikely() while here as this is not a hot path at all. Fixes: dfd718a2ed1f ("powerpc/rtas: Fix a potential race between CPU-Offline & Migration") Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc/xmon: Fix disabling tracing while in xmonNaveen N. Rao2019-07-311-2/+4
| | | | | | | | | | | | | | | | | | | | [ Upstream commit aaf06665f7ea3ee9f9754e16c1a507a89f1de5b1 ] Commit ed49f7fd6438d ("powerpc/xmon: Disable tracing when entering xmon") added code to disable recording trace entries while in xmon. The commit introduced a variable 'tracing_enabled' to record if tracing was enabled on xmon entry, and used this to conditionally enable tracing during exit from xmon. However, we are not checking the value of 'fromipi' variable in xmon_core() when setting 'tracing_enabled'. Due to this, when secondary cpus enter xmon, they will see tracing as being disabled already and tracing won't be re-enabled on exit. Fix the same. Fixes: ed49f7fd6438d ("powerpc/xmon: Disable tracing when entering xmon") Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc/cacheflush: fix variable set but not usedQian Cai2019-07-311-2/+5
| | | | | | | | | | | | | | | | | [ Upstream commit 04db3ede40ae4fc23a5c4237254c4a53bbe4c1f2 ] The powerpc's flush_cache_vmap() is defined as a macro and never use both of its arguments, so it will generate a compilation warning, lib/ioremap.c: In function 'ioremap_page_range': lib/ioremap.c:203:16: warning: variable 'start' set but not used [-Wunused-but-set-variable] Fix it by making it an inline function. Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
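Turning the no-op macro into an empty inline is enough, since an inline function consumes its arguments; a sketch of the shape:

    static inline void flush_cache_vmap(unsigned long start, unsigned long end)
    {
    	/* nothing to do on powerpc, but "start" and "end" now count as used */
    }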
* powerpc/pci/of: Fix OF flags parsing for 64bit BARsAlexey Kardashevskiy2019-07-311-0/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit df5be5be8735ef2ae80d5ae1f2453cd81a035c4b ] When the firmware does PCI BAR resource allocation, it passes the assigned addresses and flags (prefetch/64bit/...) via the "reg" property of a PCI device device tree node so the kernel does not need to do resource allocation. The flags are stored in resource::flags - the lower byte stores PCI_BASE_ADDRESS_SPACE/etc bits and the other bytes are IORESOURCE_IO/etc. Some flags from PCI_BASE_ADDRESS_xxx and IORESOURCE_xxx are duplicated, such as PCI_BASE_ADDRESS_MEM_PREFETCH/PCI_BASE_ADDRESS_MEM_TYPE_64/etc. When parsing the "reg" property, we copy the prefetch flag but we skip on PCI_BASE_ADDRESS_MEM_TYPE_64 which leaves the flags out of sync. The missing IORESOURCE_MEM_64 flag comes into play under 2 conditions: 1. we remove PCI_PROBE_ONLY for pseries (by hacking pSeries_setup_arch() or by passing "/chosen/linux,pci-probe-only"); 2. we request resource alignment (by passing pci=resource_alignment= via the kernel cmd line to request PAGE_SIZE alignment or defining ppc_md.pcibios_default_alignment which returns anything but 0). Note that the alignment requests are ignored if PCI_PROBE_ONLY is enabled. With 1) and 2), the generic PCI code in the kernel unconditionally decides to: - reassign the BARs in pci_specified_resource_alignment() (works fine) - write new BARs to the device - this fails for 64bit BARs as the generic code looks at IORESOURCE_MEM_64 (not set) and writes only lower 32bits of the BAR and leaves the upper 32bit unmodified which breaks BAR mapping in the hypervisor. This fixes the issue by copying the flag. This is useful if we want to enforce certain BAR alignment per platform as handling subpage sized BARs is proven to cause problems with hotplug (SLOF already aligns BARs to 64k). Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: Sam Bobroff <sbobroff@linux.ibm.com> Reviewed-by: Oliver O'Halloran <oohall@gmail.com> Reviewed-by: Shawn Anastasio <shawn@anastas.io> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
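The missing piece is a one-flag copy while parsing the OF "reg" flags; schematically (a sketch of the idea, not the literal hunk):

    /* the PCI_BASE_ADDRESS_MEM_TYPE_64 bit was already copied from the OF
     * flags; mirror it into the generic resource flags as well */
    if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64)
    	flags |= IORESOURCE_MEM_64;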
* kvm: vmx: segment limit check: use access lengthEugene Korenevsky2019-07-313-14/+19
| | | | | | | | | | | | | | [ Upstream commit fdb28619a8f033c13f5d9b9e8b5536bb6e68a2c3 ] There is an imperfection in get_vmx_mem_address(): access length is ignored when checking the limit. To fix this, pass access length as a function argument. The access length is usually obvious since it is used by callers after get_vmx_mem_address() call, but for vmread/vmwrite it depends on the state of 64-bit mode. Signed-off-by: Eugene Korenevsky <ekorenevsky@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* KVM: nVMX: Intercept VMWRITEs to GUEST_{CS,SS}_AR_BYTESSean Christopherson2019-07-312-2/+17
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit b643780562af5378ef7fe731c65b8f93e49c59c6 ] VMMs frequently read the guest's CS and SS AR bytes to detect 64-bit mode and CPL respectively, but effectively never write said fields once the VM is initialized. Intercepting VMWRITEs for the two fields saves ~55 cycles in copy_shadow_to_vmcs12(). Because some Intel CPUs, e.g. Haswell, drop the reserved bits of the guest access rights fields on VMWRITE, exposing the fields to L1 for VMREAD but not VMWRITE leads to inconsistent behavior between L1 and L2. On hardware that drops the bits, L1 will see the stripped down value due to reading the value from hardware, while L2 will see the full original value as stored by KVM. To avoid such an inconsistency, emulate the behavior on all CPUS, but only for intercepted VMWRITEs so as to avoid introducing pointless latency into copy_shadow_to_vmcs12(), e.g. if the emulation were added to vmcs12_write_any(). Since the AR_BYTES emulation is done only for intercepted VMWRITE, if a future patch (re)exposed AR_BYTES for both VMWRITE and VMREAD, then KVM would end up with incosistent behavior on pre-Haswell hardware, e.g. KVM would drop the reserved bits on intercepted VMWRITE, but direct VMWRITE to the shadow VMCS would not drop the bits. Add a WARN in the shadow field initialization to detect any attempt to expose an AR_BYTES field without updating vmcs12_write_any(). Note, emulation of the AR_BYTES reserved bit behavior is based on a patch[1] from Jim Mattson that applied the emulation to all writes to vmcs12 so that live migration across different generations of hardware would not introduce divergent behavior. But given that live migration of nested state has already been enabled, that ship has sailed (not to mention that no sane VMM will be affected by this behavior). [1] https://patchwork.kernel.org/patch/10483321/ Cc: Jim Mattson <jmattson@google.com> Cc: Liran Alon <liran.alon@oracle.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* kvm: vmx: fix limit checking in get_vmx_mem_address()Eugene Korenevsky2019-07-311-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit c1a9acbc5295e278d788e9f7510f543bc9864fa2 ] Intel SDM vol. 3, 5.3: The processor causes a general-protection exception (or, if the segment is SS, a stack-fault exception) any time an attempt is made to access the following addresses in a segment: - A byte at an offset greater than the effective limit - A word at an offset greater than the (effective-limit – 1) - A doubleword at an offset greater than the (effective-limit – 3) - A quadword at an offset greater than the (effective-limit – 7) Therefore, the generic limit checking error condition must be exn = (off > limit + 1 - access_len) = (off + access_len - 1 > limit) but not exn = (off + access_len > limit) as for now. Also avoid integer overflow of `off` at 32-bit KVM by casting it to u64. Note: access length is currently sizeof(u64) which is incorrect. This will be fixed in the subsequent patch. Signed-off-by: Eugene Korenevsky <ekorenevsky@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
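In code form, the corrected condition from the commit message (with the overflow-avoiding cast; variable names follow the text) is:

    /* old: exn = (off + access_len > limit), which wrongly faults an access
     * whose last byte lands exactly on the limit */
    exn = ((u64)off + access_len - 1 > limit);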
* powerpc/pseries/mobility: prevent cpu hotplug during DT updateNathan Lynch2019-07-311-0/+9
| | | | | | | | | | | | | | | | [ Upstream commit e59a175faa8df9d674247946f2a5a9c29c835725 ] CPU online/offline code paths are sensitive to parts of the device tree (various cpu node properties, cache nodes) that can be changed as a result of a migration. Prevent CPU hotplug while the device tree potentially is inconsistent. Fixes: 410bccf97881 ("powerpc/pseries: Partition migration in the kernel") Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
* Revert "kvm: x86: Use task structs fpu field for user"Paolo Bonzini2019-07-282-5/+6
| | | | | | | | | | | | | | | commit ec269475cba7bcdd1eb8fdf8e87f4c6c81a376fe upstream. This reverts commit 240c35a3783ab9b3a0afaba0dde7291295680a6b ("kvm: x86: Use task structs fpu field for user", 2018-11-06). The commit is broken and causes QEMU's FPU state to be destroyed when KVM_RUN is preempted. Fixes: 240c35a3783a ("kvm: x86: Use task structs fpu field for user") Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* KVM: nVMX: Clear pending KVM_REQ_GET_VMCS12_PAGES when leaving nestedJan Kiszka2019-07-281-0/+2
| | | | | | | | | | | | | | | | | | commit cf64527bb33f6cec2ed50f89182fc4688d0056b6 upstream. Letting this pend may cause nested_get_vmcs12_pages to run against an invalid state, corrupting the effective vmcs of L1. This was triggerable in QEMU after a guest corruption in L2, followed by a L1 reset. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Cc: stable@vger.kernel.org Fixes: 7f7f1ba33cf2 ("KVM: x86: do not load vmcs12 pages while still in SMM") Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* KVM: nVMX: do not use dangling shadow VMCS after guest resetPaolo Bonzini2019-07-281-1/+7
commit 88dddc11a8d6b09201b4db9d255b3394d9bc9e57 upstream. If a KVM guest is reset while running a nested guest, free_nested will disable the shadow VMCS execution control in the vmcs01. However, on the next KVM_RUN vmx_vcpu_run would nevertheless try to sync the VMCS12 to the shadow VMCS which has since been freed. This causes a vmptrld of a NULL pointer on my machine, but Jan reports the host to hang altogether. Let's see how much this trivial patch fixes. Reported-by: Jan Kiszka <jan.kiszka@siemens.com> Cc: Liran Alon <liran.alon@oracle.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* MIPS: lb60: Fix pin mappingsPaul Cercueil2019-07-281-8/+8
| | | | | | | | | | | | | | | | | | | | | | commit 1323c3b72a987de57141cabc44bf9cd83656bc70 upstream. The pin mappings introduced in commit 636f8ba67fb6 ("MIPS: JZ4740: Qi LB60: Add pinctrl configuration for several drivers") are completely wrong. The pinctrl driver name is incorrect, and the function and group fields are swapped. Fixes: 636f8ba67fb6 ("MIPS: JZ4740: Qi LB60: Add pinctrl configuration for several drivers") Cc: <stable@vger.kernel.org> Signed-off-by: Paul Cercueil <paul@crapouillou.net> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: od@zcrc.me Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/pseries: Fix oops in hotplug memory notifierNathan Lynch2019-07-261-0/+3
commit 0aa82c482ab2ece530a6f44897b63b274bb43c8e upstream. During post-migration device tree updates, we can oops in pseries_update_drconf_memory() if the source device tree has an ibm,dynamic-memory-v2 property and the destination has an ibm,dynamic-memory (v1) property. The notifier processes an "update" for the ibm,dynamic-memory property but it's really an add in this scenario. So make sure the old property object is there before dereferencing it. Fixes: 2b31e3aec1db ("powerpc/drmem: Add support for ibm, dynamic-memory-v2 property") Cc: stable@vger.kernel.org # v4.16+ Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/pseries: Fix xive=off command lineGreg Kurz2019-07-262-2/+66
commit a3bf9fbdad600b1e4335dd90979f8d6072e4f602 upstream. On POWER9, if the hypervisor supports XIVE exploitation mode, the guest OS will unconditionally request the XIVE interrupt mode even if XIVE was deactivated with the kernel command line xive=off. Later on, when the spapr XIVE init code handles xive=off, it disables XIVE and tries to fall back on the legacy mode XICS. This discrepancy causes a kernel panic because the hypervisor is configured to provide the XIVE interrupt mode to the guest: kernel BUG at arch/powerpc/sysdev/xics/xics-common.c:135! ... NIP xics_smp_probe+0x38/0x98 LR xics_smp_probe+0x2c/0x98 Call Trace: xics_smp_probe+0x2c/0x98 (unreliable) pSeries_smp_probe+0x40/0xa0 smp_prepare_cpus+0x62c/0x6ec kernel_init_freeable+0x148/0x448 kernel_init+0x2c/0x148 ret_from_kernel_thread+0x5c/0x68 Look for xive=off during prom_init and don't ask for XIVE in this case. One exception though: if the host only supports XIVE, we still want to boot so we ignore xive=off. Similarly, have the spapr XIVE init code look at the interrupt mode negotiated during CAS, and ignore xive=off if the hypervisor only supports XIVE. Fixes: eac1e731b59e ("powerpc/xive: guest exploitation of the XIVE interrupt controller") Cc: stable@vger.kernel.org # v4.20 Reported-by: Pavithra R. Prakash <pavrampu@in.ibm.com> Signed-off-by: Greg Kurz <groug@kaod.org> Reviewed-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/powernv: Fix stale iommu table base after VFIOAlexey Kardashevskiy2019-07-261-0/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 5636427d087a55842c1a199dfb839e6545d30e5d upstream. The powernv platform uses @dma_iommu_ops for non-bypass DMA. These ops need an iommu_table pointer which is stored in dev->archdata.iommu_table_base. It is initialized during pcibios_setup_device() which handles boot time devices. However when a device is taken from the system in order to pass it through, the default IOMMU table is destroyed but the pointer in a device is not updated; also when a device is returned back to the system, a new table pointer is not stored in dev->archdata.iommu_table_base either. So when a just returned device tries using IOMMU, it crashes on accessing stale iommu_table or its members. This calls set_iommu_table_base() when the default window is created. Note it used to be there before but was wrongly removed (see "fixes"). It did not appear before as these days most devices simply use bypass. This adds set_iommu_table_base(NULL) when a device is taken from the system to make it clear that IOMMU DMA cannot be used past that point. Fixes: c4e9d3c1e65a ("powerpc/powernv/pseries: Rework device adding to IOMMU groups") Cc: stable@vger.kernel.org # v5.0+ Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/powernv/idle: Fix restore of SPRN_LDBAR for POWER9 stop state.Athira Rajeev2019-07-261-1/+1
commit f5a9e488d62360c91c5770bd55a0b40e419a71ce upstream. commit 10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C") reimplemented the book3S idle code in platforms/powernv/idle.c. But when doing so it missed adding the per-thread LDBAR update in the core_woken path of the power9_idle_stop(). Patch fixes the same. Fixes: 10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C") Cc: stable@vger.kernel.org # v5.2+ Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190702105836.26695-1-maddy@linux.vnet.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/powernv/npu: Fix reference leakGreg Kurz2019-07-261-1/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 02c5f5394918b9b47ff4357b1b18335768cd867d upstream. Since 902bdc57451c, get_pci_dev() calls pci_get_domain_bus_and_slot(). This has the effect of incrementing the reference count of the PCI device, as explained in drivers/pci/search.c: * Given a PCI domain, bus, and slot/function number, the desired PCI * device is located in the list of PCI devices. If the device is * found, its reference count is increased and this function returns a * pointer to its data structure. The caller must decrement the * reference count by calling pci_dev_put(). If no device is found, * %NULL is returned. Nothing was done to call pci_dev_put() and the reference count of GPU and NPU PCI devices rockets up. A natural way to fix this would be to teach the callers about the change, so that they call pci_dev_put() when done with the pointer. This turns out to be quite intrusive, as it affects many paths in npu-dma.c, pci-ioda.c and vfio_pci_nvlink2.c. Also, the issue appeared in 4.16 and some affected code got moved around since then: it would be problematic to backport the fix to stable releases. All that code never cared for reference counting anyway. Call pci_dev_put() from get_pci_dev() to revert to the previous behavior. Fixes: 902bdc57451c ("powerpc/powernv/idoa: Remove unnecessary pcidev from pci_dn") Cc: stable@vger.kernel.org # v4.16 Signed-off-by: Greg Kurz <groug@kaod.org> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/watchpoint: Restore NV GPRs while returning from exceptionRavi Bangoria2019-07-261-2/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit f474c28fbcbe42faca4eb415172c07d76adcb819 upstream. powerpc hardware triggers watchpoint before executing the instruction. To make trigger-after-execute behavior, kernel emulates the instruction. If the instruction is 'load something into non-volatile register', exception handler should restore emulated register state while returning back, otherwise there will be register state corruption. eg, adding a watchpoint on a list can corrput the list: # cat /proc/kallsyms | grep kthread_create_list c00000000121c8b8 d kthread_create_list Add watchpoint on kthread_create_list->prev: # perf record -e mem:0xc00000000121c8c0 Run some workload such that new kthread gets invoked. eg, I just logged out from console: list_add corruption. next->prev should be prev (c000000001214e00), \ but was c00000000121c8b8. (next=c00000000121c8b8). WARNING: CPU: 59 PID: 309 at lib/list_debug.c:25 __list_add_valid+0xb4/0xc0 CPU: 59 PID: 309 Comm: kworker/59:0 Kdump: loaded Not tainted 5.1.0-rc7+ #69 ... NIP __list_add_valid+0xb4/0xc0 LR __list_add_valid+0xb0/0xc0 Call Trace: __list_add_valid+0xb0/0xc0 (unreliable) __kthread_create_on_node+0xe0/0x260 kthread_create_on_node+0x34/0x50 create_worker+0xe8/0x260 worker_thread+0x444/0x560 kthread+0x160/0x1a0 ret_from_kernel_thread+0x5c/0x70 List corruption happened because it uses 'load into non-volatile register' instruction: Snippet from __kthread_create_on_node: c000000000136be8: addis r29,r2,-19 c000000000136bec: ld r29,31424(r29) if (!__list_add_valid(new, prev, next)) c000000000136bf0: mr r3,r30 c000000000136bf4: mr r5,r28 c000000000136bf8: mr r4,r29 c000000000136bfc: bl c00000000059a2f8 <__list_add_valid+0x8> Register state from WARN_ON(): GPR00: c00000000059a3a0 c000007ff23afb50 c000000001344e00 0000000000000075 GPR04: 0000000000000000 0000000000000000 0000001852af8bc1 0000000000000000 GPR08: 0000000000000001 0000000000000007 0000000000000006 00000000000004aa GPR12: 0000000000000000 c000007ffffeb080 c000000000137038 c000005ff62aaa00 GPR16: 0000000000000000 0000000000000000 c000007fffbe7600 c000007fffbe7370 GPR20: c000007fffbe7320 c000007fffbe7300 c000000001373a00 0000000000000000 GPR24: fffffffffffffef7 c00000000012e320 c000007ff23afcb0 c000000000cb8628 GPR28: c00000000121c8b8 c000000001214e00 c000007fef5b17e8 c000007fef5b17c0 Watchpoint hit at 0xc000000000136bec. addis r29,r2,-19 => r29 = 0xc000000001344e00 + (-19 << 16) => r29 = 0xc000000001214e00 ld r29,31424(r29) => r29 = *(0xc000000001214e00 + 31424) => r29 = *(0xc00000000121c8c0) 0xc00000000121c8c0 is where we placed a watchpoint and thus this instruction was emulated by emulate_step. But because handle_dabr_fault did not restore emulated register state, r29 still contains stale value in above register state. Fixes: 5aae8a5370802 ("powerpc, hw_breakpoints: Implement hw_breakpoints for 64-bit server processors") Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: stable@vger.kernel.org # 2.6.36+ Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/mm/32s: fix condition that is always trueAndreas Schwab2019-07-261-1/+1
| | | | | | | | | | | | | | commit 46c2478af610efb3212b8b08f74389d69899ef70 upstream. Move a misplaced paren that makes the condition always true. Fixes: 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX") Cc: stable@vger.kernel.org # v5.1+ Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/32s: fix suspend/resume when IBATs 4-7 are usedChristophe Leroy2019-07-262-13/+128
| | | | | | | | | | | | | | | | | | | | commit 6ecb78ef56e08d2119d337ae23cb951a640dc52d upstream. Previously, only IBAT1 and IBAT2 were used to map kernel linear mem. Since commit 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX"), we may have all 8 BATs used for mapping kernel text. But the suspend/restore functions only save/restore BATs 0 to 3, and clears BATs 4 to 7. Make suspend and restore functions respectively save and reload the 8 BATs on CPUs having MMU_FTR_USE_HIGH_BATS feature. Reported-by: Andreas Schwab <schwab@linux-m68k.org> Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* parisc: Fix kernel panic due invalid values in IAOQ0 or IAOQ1Helge Deller2019-07-261-10/+18
commit 10835c854685393a921b68f529bf740fa7c9984d upstream. On parisc the privilege level of a process is stored in the lowest two bits of the instruction pointers (IAOQ0 and IAOQ1). On Linux we use privilege level 0 for the kernel and privilege level 3 for user-space. So userspace should not be allowed to modify IAOQ0 or IAOQ1 of a ptraced process to change its privilege level to e.g. 0 to try to gain kernel privileges. This patch prevents such modifications by always setting the two lowest bits to one (which relates to privilege level 3 for user-space) if IAOQ0 or IAOQ1 are modified via ptrace calls in the native and compat ptrace paths. Link: https://bugs.gentoo.org/481768 Reported-by: Jeroen Roovers <jer@gentoo.org> Cc: <stable@vger.kernel.org> Tested-by: Rolf Eike Beer <eike-kernel@sf-tec.de> Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* parisc: Avoid kernel panic triggered by invalid kprobeHelge Deller2019-07-261-0/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 59a783dbc0d5fd6792aabff933055373b6dcbf2a upstream. When running gdb I was able to trigger this kernel panic: Kernel Fault: Code=26 (Data memory access rights trap) at addr 0000000000000060 CPU: 0 PID: 1401 Comm: gdb-crash Not tainted 5.2.0-rc7-64bit+ #1053 YZrvWESTHLNXBCVMcbcbcbcbOGFRQPDI PSW: 00001000000001000000000000001111 Not tainted r00-03 000000000804000f 0000000040dee1a0 0000000040c78cf0 00000000b8d50160 r04-07 0000000040d2b1a0 000000004360a098 00000000bbbe87b8 0000000000000003 r08-11 00000000fac20a70 00000000fac24160 00000000fac1bbe0 0000000000000000 r12-15 00000000fabfb79a 00000000fac244a4 0000000000010000 0000000000000001 r16-19 00000000bbbe87b8 00000000f8f02910 0000000000010034 0000000000000000 r20-23 00000000fac24630 00000000fac24630 000000006474e552 00000000fac1aa52 r24-27 0000000000000028 00000000bbbe87b8 00000000bbbe87b8 0000000040d2b1a0 r28-31 0000000000000000 00000000b8d501c0 00000000b8d501f0 0000000003424000 sr00-03 0000000000423000 0000000000000000 0000000000000000 0000000000423000 sr04-07 0000000000000000 0000000000000000 0000000000000000 0000000000000000 IASQ: 0000000000000000 0000000000000000 IAOQ: 0000000040c78cf0 0000000040c78cf4 IIR: 539f00c0 ISR: 0000000000000000 IOR: 0000000000000060 CPU: 0 CR30: 00000000b8d50000 CR31: 00000000d22345e2 ORIG_R28: 0000000040250798 IAOQ[0]: parisc_kprobe_ss_handler+0x58/0x170 IAOQ[1]: parisc_kprobe_ss_handler+0x5c/0x170 RP(r2): parisc_kprobe_ss_handler+0x58/0x170 Backtrace: [<0000000040206ff8>] handle_interruption+0x178/0xbb8 Kernel panic - not syncing: Kernel Fault Avoid this panic by checking the return value of kprobe_running() and skip kprobe if none is currently active. Cc: <stable@vger.kernel.org> # v5.2 Acked-by: Sven Schnelle <svens@stackframe.org> Tested-by: Rolf Eike Beer <eike-kernel@sf-tec.de> Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* parisc: Ensure userspace privilege for ptraced processes in regset functionsHelge Deller2019-07-261-1/+2
commit 34c32fc603311a72cb558e5e337555434f64c27b upstream. On parisc the privilege level of a process is stored in the lowest two bits of the instruction pointers (IAOQ0 and IAOQ1). On Linux we use privilege level 0 for the kernel and privilege level 3 for user-space. So userspace should not be allowed to modify IAOQ0 or IAOQ1 of a ptraced process to change its privilege level to e.g. 0 to try to gain kernel privileges. This patch prevents such modifications in the regset support functions by always setting the two lowest bits to one (which relates to privilege level 3 for user-space) if IAOQ0 or IAOQ1 are modified via ptrace regset calls. Link: https://bugs.gentoo.org/481768 Cc: <stable@vger.kernel.org> # v4.7+ Tested-by: Rolf Eike Beer <eike-kernel@sf-tec.de> Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* mm/nvdimm: add is_ioremap_addr and use that to check ioremap addressAneesh Kumar K.V2019-07-261-0/+14
| | | | | | | | | | | | | | | | | | | | | commit 9bd3bb6703d8c0a5fb8aec8e3287bd55b7341dcd upstream. Architectures like powerpc use different address range to map ioremap and vmalloc range. The memunmap() check used by the nvdimm layer was wrongly using is_vmalloc_addr() to check for ioremap range which fails for ppc64. This result in ppc64 not freeing the ioremap mapping. The side effect of this is an unbind failure during module unload with papr_scm nvdimm driver Link: http://lkml.kernel.org/r/20190701134038.14165-1-aneesh.kumar@linux.ibm.com Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Fixes: b5beae5e224f ("powerpc/pseries: Add driver for PAPR SCM regions") Cc: Dan Williams <dan.j.williams@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
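A sketch of what such a helper looks like, assuming the powerpc IOREMAP_BASE/IOREMAP_END window (the upstream version may differ in detail):

    static inline bool is_ioremap_addr(const void *x)
    {
    #ifdef CONFIG_PPC64
    	unsigned long addr = (unsigned long)x;

    	/* the ioremap window is disjoint from the vmalloc range on ppc64 */
    	return addr >= IOREMAP_BASE && addr < IOREMAP_END;
    #else
    	return false;
    #endif
    }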
* x86/stacktrace: Prevent infinite loop in arch_stack_walk_user()Eiichi Tsukata2019-07-261-5/+3
commit cbf5b73d162b22e044fe0b7d51dcaa33be065253 upstream. arch_stack_walk_user() checks `if (fp == frame.next_fp)` to prevent an infinite loop caused by self-reference, but that is not enough for a circular reference. Once no return address can be found, there is no point in continuing the loop, so break out. Fixes: 02b67518e2b1 ("tracing: add support for userspace stacktraces in tracing/iter_ctrl") Signed-off-by: Eiichi Tsukata <devel@etsukata.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lkml.kernel.org/r/20190711023501.963-1-devel@etsukata.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
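A sketch of the loop body after the fix (field names as in x86's user stack_frame; details may differ from the upstream hunk):

    /* once the unwinder cannot produce a return address, stop: this also
     * terminates frames that point back at themselves or at an earlier
     * frame in a cycle */
    if (!frame.ret_addr)
    	break;
    if (fp == frame.next_fp)
    	break;
    fp = frame.next_fp;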
* perf/x86/amd/uncore: Set the thread mask for F17h L3 PMCsKim Phillips2019-07-261-4/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 2f217d58a8a086d3399fecce39fb358848e799c4 upstream. Fill in the L3 performance event select register ThreadMask bitfield, to enable per hardware thread accounting. Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <stable@vger.kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Gary Hook <Gary.Hook@amd.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Liska <mliska@suse.cz> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Pu Wen <puwen@hygon.cn> Cc: Stephane Eranian <eranian@google.com> Cc: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: https://lkml.kernel.org/r/20190628215906.4276-2-kim.phillips@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* perf/x86/amd/uncore: Do not set 'ThreadMask' and 'SliceMask' for non-L3 PMCsKim Phillips2019-07-261-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 16f4641166b10e199f0d7b68c2c5f004fef0bda3 upstream. The following commit: d7cbbe49a930 ("perf/x86/amd/uncore: Set ThreadMask and SliceMask for L3 Cache perf events") enables L3 PMC events for all threads and slices by writing 1's in 'ChL3PmcCfg' (L3 PMC PERF_CTL) register fields. Those bitfields overlap with high order event select bits in the Data Fabric PMC control register, however. So when a user requests raw Data Fabric events (-e amd_df/event=0xYYY/), the two highest order bits get inadvertently set, changing the counter select to events that don't exist, and for which no counts are read. This patch changes the logic to write the L3 masks only when dealing with L3 PMC counters. AMD Family 16h and below Northbridge (NB) counters were not affected. Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <stable@vger.kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Gary Hook <Gary.Hook@amd.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Liska <mliska@suse.cz> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Pu Wen <puwen@hygon.cn> Cc: Stephane Eranian <eranian@google.com> Cc: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Fixes: d7cbbe49a930 ("perf/x86/amd/uncore: Set ThreadMask and SliceMask for L3 Cache perf events") Link: https://lkml.kernel.org/r/20190628215906.4276-1-kim.phillips@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* perf/x86/intel: Fix spurious NMI on fixed counterKan Liang2019-07-261-5/+3
commit e4557c1a46b0d32746bd309e1941914b5a6912b4 upstream. If a user first samples a PEBS event on a fixed counter, then samples a non-PEBS event on the same fixed counter on Icelake, it will trigger a spurious NMI. For example: perf record -e 'cycles:p' -a perf record -e 'cycles' -a The error message for the spurious NMI: [June 21 15:38] Uhhuh. NMI received for unknown reason 30 on CPU 2. [ +0.000000] Do you have a strange power saving mode enabled? [ +0.000000] Dazed and confused, but trying to continue The bug was introduced by the following commit: commit 6f55967ad9d9 ("perf/x86/intel: Fix race in intel_pmu_disable_event()") The commit moves the intel_pmu_pebs_disable() after intel_pmu_disable_fixed(), which returns immediately. The related bit of PEBS_ENABLE MSR will never be cleared for the fixed counter. Then a non-PEBS event runs on the fixed counter, but the bit on PEBS_ENABLE is still set, which triggers spurious NMIs. Check and disable PEBS for fixed counters after intel_pmu_disable_fixed(). Reported-by: Yi, Ammy <ammy.yi@intel.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: <stable@vger.kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Fixes: 6f55967ad9d9 ("perf/x86/intel: Fix race in intel_pmu_disable_event()") Link: https://lkml.kernel.org/r/20190625142135.22112-1-kan.liang@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* x86/boot: Fix memory leak in default_get_smp_config()David Rientjes2019-07-261-6/+4
| | | | | | | | | | | | | | | | | | | commit e74bd96989dd42a51a73eddb4a5510a6f5e42ac3 upstream. When default_get_smp_config() is called with early == 1 and mpf->feature1 is non-zero, mpf is leaked because the return path does not do early_memunmap(). Fix this and share a common exit routine. Fixes: 5997efb96756 ("x86/boot: Use memremap() to map the MPF and MPC data") Reported-by: Cfir Cohen <cfir@google.com> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1907091942570.28240@chino.kir.corp.google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
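The structure of the fix is a shared exit label so every return path unmaps what early_memremap() mapped; a sketch reconstructed from the commit description (the surrounding code is abbreviated and only illustrative):

    	if (mpf->feature1) {
    		if (early) {
    			/* local APIC has the default address */
    			mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
    			goto out;	/* was "return;", leaking the mpf mapping */
    		}
    		construct_default_ISA_mptable(mpf->feature1);
    	}
    out:
    	early_memunmap(mpf, sizeof(*mpf));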
* x86/hyper-v: Zero out the VP ASSIST PAGE on allocationDexuan Cui2019-07-261-2/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit e320ab3cec7dd8b1606964d81ae1e14391ff8e96 upstream. The VP ASSIST PAGE is an "overlay" page (see Hyper-V TLFS's Section 5.2.1 "GPA Overlay Pages" for the details) and here is an excerpt: "The hypervisor defines several special pages that "overlay" the guest's Guest Physical Addresses (GPA) space. Overlays are addressed GPA but are not included in the normal GPA map maintained internally by the hypervisor. Conceptually, they exist in a separate map that overlays the GPA map. If a page within the GPA space is overlaid, any SPA page mapped to the GPA page is effectively "obscured" and generally unreachable by the virtual processor through processor memory accesses. If an overlay page is disabled, the underlying GPA page is "uncovered", and an existing mapping becomes accessible to the guest." SPA = System Physical Address = the final real physical address. When a CPU (e.g. CPU1) is onlined, hv_cpu_init() allocates the VP ASSIST PAGE and enables the EOI optimization for this CPU by writing the MSR HV_X64_MSR_VP_ASSIST_PAGE. From now on, hvp->apic_assist belongs to the special SPA page, and this CPU *always* uses hvp->apic_assist (which is shared with the hypervisor) to decide if it needs to write the EOI MSR. When a CPU is offlined then on the outgoing CPU: 1. hv_cpu_die() disables the EOI optimizaton for this CPU, and from now on hvp->apic_assist belongs to the original "normal" SPA page; 2. the remaining work of stopping this CPU is done 3. this CPU is completely stopped. Between 1 and 3, this CPU can still receive interrupts (e.g. reschedule IPIs from CPU0, and Local APIC timer interrupts), and this CPU *must* write the EOI MSR for every interrupt received, otherwise the hypervisor may not deliver further interrupts, which may be needed to completely stop the CPU. So, after the EOI optimization is disabled in hv_cpu_die(), it's required that the hvp->apic_assist's bit0 is zero, which is not guaranteed by the current allocation mode because it lacks __GFP_ZERO. As a consequence the bit might be set and interrupt handling would not write the EOI MSR causing interrupt delivery to become stuck. Add the missing __GFP_ZERO to the allocation. Note 1: after the "normal" SPA page is allocted and zeroed out, neither the hypervisor nor the guest writes into the page, so the page remains with zeros. Note 2: see Section 10.3.5 "EOI Assist" for the details of the EOI optimization. When the optimization is enabled, the guest can still write the EOI MSR register irrespective of the "No EOI required" value, but that's slower than the optimized assist based variant. Fixes: ba696429d290 ("x86/hyper-v: Implement EOI assist") Signed-off-by: Dexuan Cui <decui@microsoft.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/ <PU1P153MB0169B716A637FABF07433C04BFCB0@PU1P153MB0169.APCP153.PROD.OUTLOOK.COM Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
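In code terms the fix is tiny: the VP ASSIST PAGE allocation simply gains __GFP_ZERO. Illustratively (the real allocation call and variable names in hv_cpu_init() may differ):

    /* the page backing the overlay must start out zeroed, so that bit 0
     * ("no EOI required") reads as 0 once the overlay is disabled in
     * hv_cpu_die() and interrupts received while the CPU dies still get
     * their EOI written */
    hvp = (struct hv_vp_assist_page *)__get_free_page(GFP_KERNEL | __GFP_ZERO);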
* ARM: dts: gemini: Set DIR-685 SPI CS as active lowLinus Walleij2019-07-261-1/+1
| | | | | | | | | | | | | | | commit f90b8fda3a9d72a9422ea80ae95843697f94ea4a upstream. The SPI to the display on the DIR-685 is active low, we were just saved by the SPI library enforcing active low on everything before, so set it as active low to avoid ambiguity. Link: https://lore.kernel.org/r/20190715202101.16060-1-linus.walleij@linaro.org Cc: stable@vger.kernel.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* arm64: Fix incorrect irqflag restore for priority maskingJulien Thierry2019-07-269-83/+123
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit bd82d4bd21880b7c4d5f5756be435095d6ae07b5 upstream. When using IRQ priority masking to disable interrupts, in order to deal with the PSR.I state, local_irq_save() would convert the I bit into a PMR value (GIC_PRIO_IRQOFF). This resulted in local_irq_restore() potentially modifying the value of PMR in undesired locations due to the state of PSR.I upon flag saving [1]. In an attempt to solve this issue in a less hackish manner, introduce a bit (GIC_PRIO_PSR_I_SET) for the PMR values that can represent whether PSR.I is being used to disable interrupts, in which case it takes precedence over the status of interrupt masking via PMR. GIC_PRIO_PSR_I_SET is chosen such that (<pmr_value> | GIC_PRIO_PSR_I_SET) does not mask more interrupts than <pmr_value>, as some sections (e.g. arch_cpu_idle(), the interrupt acknowledge path) require PMR not to mask interrupts that could be signaled to the CPU when using only PSR.I. [1] https://www.spinics.net/lists/arm-kernel/msg716956.html Fixes: 4a503217ce37 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking") Cc: <stable@vger.kernel.org> # 5.1.x- Reported-by: Zenghui Yu <yuzenghui@huawei.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Wei Li <liwei391@huawei.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Christoffer Dall <christoffer.dall@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Pouloze <suzuki.poulose@arm.com> Cc: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
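A compressed illustration of the idea, with the constant values and accessor names treated as illustrative rather than the exact upstream definitions:

    /* The saved "flags" are a PMR value; one spare bit in that value records
     * whether PSR.I was set when the flags were taken, and it is chosen so
     * that OR-ing it in never masks more interrupts than the PMR value alone. */
    #define GIC_PRIO_IRQON       0xe0
    #define GIC_PRIO_IRQOFF      (GIC_PRIO_IRQON & ~0x80)   /* masks IRQs   */
    #define GIC_PRIO_PSR_I_SET   (1 << 4)                    /* PSR.I marker */

    unsigned long flags = read_pmr();        /* hypothetical PMR accessor    */
    if (psr_i_bit_set())                     /* hypothetical PSR.I accessor  */
            flags |= GIC_PRIO_PSR_I_SET;     /* restore can then honour PSR.I */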
* arm64: irqflags: Add condition flags to inline asm clobber listJulien Thierry2019-07-261-2/+2
| | | | | | | | | | | | | | | | | | commit f57065782f245ca96f1472209a485073bbc11247 upstream. Some of the inline assembly instructions use the condition flags and need to include "cc" in the clobber list. Fixes: 4a503217ce37 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking") Cc: <stable@vger.kernel.org> # 5.1.x- Suggested-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
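A generic example (not the arm64 irqflags code itself) of why the clobber matters: if an asm block executes a compare, the condition flags change behind the compiler's back unless "cc" is listed.

    /* AArch64 inline asm that performs a compare: "cc" tells the compiler
     * the NZCV condition flags are clobbered, so it will not assume any
     * earlier flag-setting result still holds across this asm block. */
    static inline int is_nonzero(unsigned long x)
    {
            int res;

            asm volatile("cmp   %1, #0\n"
                         "cset  %w0, ne"
                         : "=r" (res)
                         : "r" (x)
                         : "cc");
            return res;
    }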
* arm64: tegra: Fix AGIC register rangeJon Hunter2019-07-261-1/+1
| | | | | | | | | | | | | | | | | | | commit ba24eee6686f6ed3738602b54d959253316a9541 upstream. The Tegra AGIC interrupt controller is an ARM GIC400 interrupt controller. Per the ARM GIC device-tree binding, the first address region is for the GIC distributor registers and the second address region is for the GIC CPU interface registers. The address space for the distributor registers is 4kB, but currently this is incorrectly defined as 8kB for the Tegra AGIC and overlaps with the CPU interface registers. Correct the address space for the distributor to be 4kB. Cc: stable@vger.kernel.org Signed-off-by: Jon Hunter <jonathanh@nvidia.com> Fixes: bcdbde433542 ("arm64: tegra: Add AGIC node for Tegra210") Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* KVM: x86/vPMU: refine kvm_pmu err msg when event creation failedLike Xu2019-07-261-2/+2
| | | | | | | | | | | | | | | | | commit 6fc3977ccc5d3c22e851f2dce2d3ce2a0a843842 upstream. If a perf_event creation fails for any reason in the host perf subsystem, the failure is not logged anywhere, and the guest may see abnormal sampling data as a result. In debug mode, this message helps in understanding the state of the vPMC; the number of occurrences is not strictly limited, but it must not spam the log either. Suggested-by: Joe Perches <joe@perches.com> Signed-off-by: Like Xu <like.xu@linux.intel.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
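The kind of message this entry refers to, sketched with a rate-limited debug print (the format string and call site are assumptions, not the upstream diff):

    /* Report a failed perf_event creation for a guest vPMC in a way that is
     * visible when debugging but cannot flood the kernel log. */
    if (IS_ERR(event)) {
            pr_debug_ratelimited("kvm_pmu: event creation failed %ld\n",
                                 PTR_ERR(event));
            return;
    }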
* KVM: PPC: Book3S HV: Fix CR0 setting in TM emulationMichael Neuling2019-07-261-3/+3
| | | | | | | | | | | | | | | | | | | | | | commit 3fefd1cd95df04da67c83c1cb93b663f04b3324f upstream. When emulating tsr, treclaim and trechkpt, we incorrectly set CR0. The code currently sets: CR0 <- 00 || MSR[TS] but according to the ISA it should be: CR0 <- 0 || MSR[TS] || 0 This fixes the bit shift to put the bits in the correct location. This is a data integrity issue as CR0 is corrupted. Fixes: 4bb3c7a0208f ("KVM: PPC: Book3S HV: Work around transactional memory bugs in POWER9") Cc: stable@vger.kernel.org # v4.17+ Tested-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
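Roughly the shape of the fix, as a sketch using the constants from the powerpc headers (the surrounding emulation code is elided): CR0 is the top nibble of CR, so MSR[TS] has to be shifted one bit further left to land in the middle two bits of that nibble.

    /* before: CR0 = 00 || MSR[TS]   (TS lands in bits 29:28 of CR) */
    vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
            (((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);

    /* after:  CR0 = 0 || MSR[TS] || 0   (TS lands in bits 30:29 of CR) */
    vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
            (((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);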
* KVM: PPC: Book3S HV: Clear pending decrementer exceptions on nested guest entrySuraj Jitindar Singh2019-07-261-2/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 3c25ab35fbc8526ac0c9b298e8a78e7ad7a55479 upstream. If we enter an L1 guest with a pending decrementer exception then this is cleared on guest exit if the guest has written a positive value into the decrementer (indicating that it handled the decrementer exception) since there is no other way to detect that the guest has handled the pending exception and that it should be dequeued. In the event that the L1 guest tries to run a nested (L2) guest immediately after this and the L2 guest decrementer is negative (which is loaded by L1 before making the H_ENTER_NESTED hcall), then the pending decrementer exception isn't cleared and the L2 entry is blocked since L1 has a pending exception, even though L1 may have already handled the exception and written a positive value for its decrementer. This results in a loop of L1 trying to enter the L2 guest and L0 blocking the entry since L1 has an interrupt pending, with the outcome being that L2 never gets to run and hangs. Fix this by clearing any pending decrementer exceptions when L1 makes the H_ENTER_NESTED hcall, since it won't do this if its decrementer has gone negative, and anyway its decrementer has been communicated to L0 in the hdec_expires field and L0 will return control to L1 when this goes negative by delivering an H_DECREMENTER exception. Fixes: 95a6432ce903 ("KVM: PPC: Book3S HV: Streamlined guest entry/exit path on P9 for radix guests") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
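A sketch of the approach (the helper exists in the powerpc KVM code, but its exact placement in the H_ENTER_NESTED path is assumed here):

    /* On H_ENTER_NESTED, L1's remaining time is already described by the
     * hdec expiry it passed down, so a stale pending decrementer exception
     * can be dropped instead of blocking the L2 entry forever. */
    kvmppc_core_dequeue_dec(vcpu);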
* KVM: PPC: Book3S HV: Signed extend decrementer value if not using large decrementerSuraj Jitindar Singh2019-07-261-0/+2
| | | | | | | | | | | | | | | | | | | | | | | | commit 869537709ebf1dc865e75c3fc97b23f8acf37c16 upstream. On POWER9 the decrementer can operate in large decrementer mode where the decrementer is 56 bits and sign extended to 64 bits. When not operating in this mode the decrementer behaves as a 32 bit decrementer which is NOT sign extended (as on POWER8). Currently when reading a guest decrementer value we don't take into account whether the large decrementer is enabled or not, and this means the value will be incorrect when the guest is not using the large decrementer. Fix this by sign extending the value read when the guest isn't using the large decrementer. Fixes: 95a6432ce903 ("KVM: PPC: Book3S HV: Streamlined guest entry/exit path on P9 for radix guests") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
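A minimal sketch of the read-side fix (flag and register names as used in the powerpc code; the exact function it lands in is assumed):

    /* The guest decrementer is only 32 bits wide unless the large
     * decrementer is enabled in LPCR, so sign extend the narrow value. */
    s64 dec = mfspr(SPRN_DEC);

    if (!(lpcr & LPCR_LD))
            dec = (s32) dec;    /* 32-bit decrementer: sign extend to 64 bits */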
* KVM: VMX: check CPUID before allowing read/write of IA32_XSSWanpeng Li2019-07-261-2/+8
| | | | | | | | | | | | | | | | | | commit 4d763b168e9c5c366b05812c7bba7662e5ea3669 upstream. Raise #GP when the guest reads or writes IA32_XSS but the CPUID bits say that it shouldn't exist. Fixes: 203000993de5 (kvm: vmx: add MSR logic for XSAVES) Reported-by: Xiaoyao Li <xiaoyao.li@linux.intel.com> Reported-by: Tao Xu <tao3.xu@intel.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
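A sketch of the guard described above, as it could look in an MSR read handler (the exact conditions and surrounding switch are assumptions based on the commit description):

    case MSR_IA32_XSS:
            /* Only expose IA32_XSS when the guest's CPUID advertises XSAVES;
             * returning non-zero makes KVM inject #GP for guest accesses. */
            if (!msr_info->host_initiated &&
                !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
                    return 1;
            msr_info->data = vcpu->arch.ia32_xss;
            break;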