path: root/arch
Commit message (Author, Age; Files, Lines changed)
...
* xtensa: add __bswap{si,di}2 helpers (Max Filippov, 2023-05-30; 4 files, -1/+42)

  commit 034f4a7877c32a8efd6beee4d71ed14e424499a9 upstream.

  gcc-13 may generate calls for __bswap{si,di}2. This breaks the kernel build when optimization for size is selected. Add __bswap{si,di}2 helpers to fix that.

  Cc: stable@vger.kernel.org
  Fixes: 19c5699f9aff ("xtensa: don't link with libgcc")
  Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
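  For reference, the helpers only need to provide the semantics of the standard libgcc byte-swap routines that gcc may emit calls to. A minimal C sketch of those semantics (the actual patch implements them in xtensa assembly) looks like this:

      /* Byte-reverse a 32-bit value, as gcc expects from libgcc. */
      unsigned int __bswapsi2(unsigned int x)
      {
              return ((x & 0x000000ffU) << 24) |
                     ((x & 0x0000ff00U) <<  8) |
                     ((x & 0x00ff0000U) >>  8) |
                     ((x & 0xff000000U) >> 24);
      }

      /* Byte-reverse a 64-bit value by swapping its two reversed halves. */
      unsigned long long __bswapdi2(unsigned long long x)
      {
              return ((unsigned long long)__bswapsi2((unsigned int)x) << 32) |
                     __bswapsi2((unsigned int)(x >> 32));
      }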
* xtensa: fix signal delivery to FDPIC process (Max Filippov, 2023-05-30; 1 file, -8/+27)

  commit 9c2cc74fb31ec76b8b118c97041a6a154a3ff219 upstream.

  Fetch the function descriptor pointed to by the signal handler pointer from userspace on signal delivery, and the function pointer pointed to by sa_restorer on return from the signal handler.

  Cc: stable@vger.kernel.org
  Fixes: e3ddb8bbe0f8 ("xtensa: add FDPIC and static PIE support for noMMU")
  Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* m68k: Move signal frame following exception on 68020/030 (Finn Thain, 2023-05-30; 1 file, -4/+10)

  commit b845b574f86dcb6a70dfa698aa87a237b0878d2a upstream.

  On 68030/020, an instruction such as,

      moveml %a2-%a3/%a5,%sp@-

  may cause a stack page fault during instruction execution (i.e. not at an instruction boundary) and produce a format 0xB exception frame. In this situation, the value of USP will be unreliable. If a signal is to be delivered following the exception, this USP value is used to calculate the location for a signal frame. This can result in a corrupted user stack.

  The corruption was detected in dash (actually in glibc) where it showed up as an intermittent "stack smashing detected" message and crash following signal delivery for SIGCHLD. It was hard to reproduce that failure because delivery of the signal raced with the page fault and because the kernel places an unpredictable gap of up to 7 bytes between the USP and the signal frame.

  A format 0xB exception frame can be produced by a bus error or an address error. The 68030 Users Manual says that address errors occur immediately upon detection during instruction prefetch. The instruction pipeline allows prefetch to overlap with other instructions, which means an address error can arise during the execution of a different instruction. So it seems likely that this patch may help in the address error case also.

  Reported-and-tested-by: Stan Johnson <userm57@yahoo.com>
  Link: https://lore.kernel.org/all/CAMuHMdW3yD22_ApemzW_6me3adq6A458u1_F0v-1EYwK_62jPA@mail.gmail.com/
  Cc: Michael Schmitz <schmitzmic@gmail.com>
  Cc: Andreas Schwab <schwab@linux-m68k.org>
  Cc: stable@vger.kernel.org
  Co-developed-by: Michael Schmitz <schmitzmic@gmail.com>
  Signed-off-by: Michael Schmitz <schmitzmic@gmail.com>
  Signed-off-by: Finn Thain <fthain@linux-m68k.org>
  Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
  Link: https://lore.kernel.org/r/9e66262a754fcba50208aa424188896cc52a1dd1.1683365892.git.fthain@linux-m68k.org
  Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* x86/mm: Avoid incomplete Global INVLPG flushes (Dave Hansen, 2023-05-30; 1 file, -0/+25)

  commit ce0b15d11ad837fbacc5356941712218e38a0a83 upstream.

  The INVLPG instruction is used to invalidate TLB entries for a specified virtual address. When PCIDs are enabled, INVLPG is supposed to invalidate TLB entries for the specified address for both the current PCID *and* Global entries. (Note: Only kernel mappings set Global=1.)

  Unfortunately, some INVLPG implementations can leave Global translations unflushed when PCIDs are enabled.

  As a workaround, never enable PCIDs on affected processors.

  I expect there to eventually be microcode mitigations to replace this software workaround. However, the exact version numbers where that will happen are not known today. Once the version numbers are set in stone, the processor list can be tweaked to only disable PCIDs on affected processors with affected microcode.

  Note: if anyone wants a quick fix that doesn't require patching, just stick 'nopcid' on your kernel command-line.

  Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
  Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
  Cc: stable@vger.kernel.org
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
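  The shape of the software workaround is simply to clear the PCID feature bit during CPU setup on affected parts. A hedged sketch using the usual x86 cpufeature helpers (the model-matching predicate here is hypothetical, not the literal patch):

      /* Refuse to use PCIDs on parts whose INVLPG may skip Global entries. */
      if (boot_cpu_has(X86_FEATURE_PCID) && cpu_affected_by_invlpg_erratum())
              setup_clear_cpu_cap(X86_FEATURE_PCID);

  Booting with "nopcid" on the command line has the same effect without patching, as the commit message notes.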
* ARM: 9297/1: vfp: avoid unbalanced stack on 'success' return path (Ard Biesheuvel, 2023-05-24; 2 files, -4/+9)

  [ Upstream commit 2b951b0efbaa6c805854b60c11f08811054d50cd ]

  Commit c76c6c4ecbec0deb5 ("ARM: 9294/2: vfp: Fix broken softirq handling with instrumentation enabled") updated the VFP exception entry logic to go via a C function, so that we get the compiler's version of local_bh_disable(), which may be instrumented, and isn't generally callable from assembler.

  However, this assumes that passing an alternative 'success' return address works in C as it does in asm, and this is only the case if the C calls in question are tail calls, as otherwise, the stack will need some unwinding as well.

  I have already sent patches to the list that replace most of the asm logic with C code, and so it is preferable to have a minimal fix that addresses the issue and can be backported along with the commit that it fixes to v6.3 from v6.4. Hopefully, we can land the C conversion for v6.5.

  So instead of passing the 'success' return address as a function argument, pass the stack address from where to pop it so that both LR and SP have the expected value.

  Fixes: c76c6c4ecbec0deb5 ("ARM: 9294/2: vfp: Fix broken softirq handling with ...")
  Reported-by: syzbot+d4b00edc2d0c910d4bf4@syzkaller.appspotmail.com
  Tested-by: syzbot+d4b00edc2d0c910d4bf4@syzkaller.appspotmail.com
  Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
  Tested-by: Andrew Lunn <andrew@lunn.ch>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Tested-by: Andre Przywara <andre.przywara@arm.com>
  Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* ARM: 9294/2: vfp: Fix broken softirq handling with instrumentation enabled (Ard Biesheuvel, 2023-05-24; 4 files, -34/+29)

  [ Upstream commit c76c6c4ecbec0deb56a4f9e932b26866024a508f ]

  Commit 62b95a7b44d1 ("ARM: 9282/1: vfp: Manipulate task VFP state with softirqs disabled") replaced the en/disable preemption calls inside the VFP state handling code with en/disabling of soft IRQs, which is necessary to allow kernel use of the VFP/SIMD unit when handling a soft IRQ.

  Unfortunately, when lockdep is enabled (or other instrumentation that enables TRACE_IRQFLAGS), the disable path implemented in asm fails to perform the lockdep and RCU related bookkeeping, resulting in spurious warnings and other badness.

  So let's rework the VFP entry code a little bit so we can make the local_bh_disable() call from C, with all the instrumentations that happen to have been configured. Calling local_bh_enable() can be done from asm, as it is a simple wrapper around __local_bh_enable_ip(), which is always a callable function.

  Link: https://lore.kernel.org/all/ZBBYCSZUJOWBg1s8@localhost.localdomain/
  Fixes: 62b95a7b44d1 ("ARM: 9282/1: vfp: Manipulate task VFP state with softirqs disabled")
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
  Tested-by: Guenter Roeck <linux@roeck-us.net>
  Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* rethook, fprobe: do not trace rethook related functions (Ze Gao, 2023-05-24; 3 files, -0/+4)

  commit 571a2a50a8fc546145ffd3bf673547e9fe128ed2 upstream.

  These functions are already marked as NOKPROBE to prevent recursion and we have the same reason to blacklist them if rethook is used with fprobe, since they are beyond the recursion-free region ftrace can guard.

  Link: https://lore.kernel.org/all/20230517034510.15639-5-zegao@tencent.com/
  Fixes: f3a112c0c40d ("x86,rethook,kprobes: Replace kretprobe with rethook on x86")
  Signed-off-by: Ze Gao <zegao@tencent.com>
  Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
  Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Cc: stable@vger.kernel.org
  Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* arm64: mte: Do not set PG_mte_tagged if tags were not initialized (Peter Collingbourne, 2023-05-24; 1 file, -5/+2)

  commit c4c597f1b367433c52c531dccd6859a39b4580fb upstream.

  The mte_sync_page_tags() function sets PG_mte_tagged if it initializes page tags. Then we return to mte_sync_tags(), which sets PG_mte_tagged again. At best, this is redundant. However, it is possible for mte_sync_page_tags() to return without having initialized tags for the page, i.e. in the case where check_swap is true (non-compound page), is_swap_pte(old_pte) is false and pte_is_tagged is false. So at worst, we set PG_mte_tagged on a page with uninitialized tags. This can happen if, for example, page migration causes a PTE for an untagged page to be replaced. If the userspace program subsequently uses mprotect() to enable PROT_MTE for that page, the uninitialized tags will be exposed to userspace.

  Fix it by removing the redundant call to set_page_mte_tagged().

  Fixes: e059853d14ca ("arm64: mte: Fix/clarify the PG_mte_tagged semantics")
  Signed-off-by: Peter Collingbourne <pcc@google.com>
  Cc: <stable@vger.kernel.org> # 6.1
  Link: https://linux-review.googlesource.com/id/Ib02d004d435b2ed87603b858ef7480f7b1463052
  Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
  Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
  Link: https://lore.kernel.org/r/20230420214327.2357985-1-pcc@google.com
  Signed-off-by: Will Deacon <will@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* arm64: Also reset KASAN tag if page is not PG_mte_tagged (Peter Collingbourne, 2023-05-24; 1 file, -2/+3)

  commit 2efbafb91e12ff5a16cbafb0085e4c10c3fca493 upstream.

  Consider the following sequence of events:

  1) A page in a PROT_READ|PROT_WRITE VMA is faulted.
  2) Page migration allocates a page with the KASAN allocator, causing it to receive a non-match-all tag, and uses it to replace the page faulted in 1.
  3) The program uses mprotect() to enable PROT_MTE on the page faulted in 1.

  As a result of step 3, we are left with a non-match-all tag for a page with tags accessible to userspace, which can lead to the same kind of tag check faults that commit e74a68468062 ("arm64: Reset KASAN tag in copy_highpage with HW tags only") intended to fix.

  The general invariant that we have for pages in a VMA with VM_MTE_ALLOWED is that they cannot have a non-match-all tag. As a result of step 2, the invariant is broken. This means that the fix in the referenced commit was incomplete and we also need to reset the tag for pages without PG_mte_tagged.

  Fixes: e5b8d9218951 ("arm64: mte: reset the page tag in page->flags")
  Cc: <stable@vger.kernel.org> # 5.15
  Link: https://linux-review.googlesource.com/id/I7409cdd41acbcb215c2a7417c1e50d37b875beff
  Signed-off-by: Peter Collingbourne <pcc@google.com>
  Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
  Link: https://lore.kernel.org/r/20230420210945.2313627-1-pcc@google.com
  Signed-off-by: Will Deacon <will@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
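  The userspace side of step 3 in the sequence above is just an mprotect() call. A hedged sketch (the PROT_MTE value is assumed from the arm64 uapi headers; the helper name is illustrative):

      #include <sys/mman.h>
      #include <stdio.h>

      #ifndef PROT_MTE
      #define PROT_MTE 0x20   /* arm64-specific flag, assumed from asm/mman.h */
      #endif

      /* Enable tag checking on an already-faulted anonymous page. */
      static int enable_mte(void *page, size_t len)
      {
              if (mprotect(page, len, PROT_READ | PROT_WRITE | PROT_MTE) != 0) {
                      perror("mprotect(PROT_MTE)");
                      return -1;
              }
              return 0;
      }

  If page migration replaced the page between steps 1 and 3, the kernel must reset the KASAN tag at that point, which is what this commit restores.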
* s390/crypto: use vector instructions only if available for ChaCha20 (Heiko Carstens, 2023-05-24; 1 file, -1/+1)

  commit 8703dd6b238da0ec6c276e53836f8200983d3d9b upstream.

  Commit 349d03ffd5f6 ("crypto: s390 - add crypto library interface for ChaCha20") added a library interface to the s390 specific ChaCha20 implementation. However no check was added to verify if the required facilities are installed before branching into the assembler code.

  If compiled into the kernel, this will lead to the following crash, if vector instructions are not available:

      data exception: 0007 ilc:3 [#1] SMP
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.3.0-rc7+ #11
      Hardware name: IBM 3931 A01 704 (KVM/Linux)
      Krnl PSW : 0704e00180000000 000000001857277a (chacha20_vx+0x32/0x818)
                 R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
      Krnl GPRS: 0000037f0000000a ffffffffffffff60 000000008184b000 0000000019f5c8e6
                 0000000000000109 0000037fffb13c58 0000037fffb13c78 0000000019bb1780
                 0000037fffb13c58 0000000019f5c8e6 000000008184b000 0000000000000109
                 00000000802d8000 0000000000000109 0000000018571ebc 0000037fffb13718
      Krnl Code: 000000001857276a: c07000b1f80b  larl  %r7,0000000019bb1780
                 0000000018572770: a708000a      lhi   %r0,10
                #0000000018572774: e78950000c36  vlm   %v24,%v25,0(%r5),0
                >000000001857277a: e7a060000806  vl    %v26,0(%r6),0
                 0000000018572780: e7bf70004c36  vlm   %v27,%v31,0(%r7),4
                 0000000018572786: e70b00000456  vlr   %v0,%v27
                 000000001857278c: e71800000456  vlr   %v1,%v24
                 0000000018572792: e74b00000456  vlr   %v4,%v27
      Call Trace:
       [<000000001857277a>] chacha20_vx+0x32/0x818
      Last Breaking-Event-Address:
       [<0000000018571eb6>] chacha20_crypt_s390.constprop.0+0x6e/0xd8
      ---[ end trace 0000000000000000 ]---
      Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

  Fix this by adding a missing MACHINE_HAS_VX check.

  Fixes: 349d03ffd5f6 ("crypto: s390 - add crypto library interface for ChaCha20")
  Reported-by: Marc Hartmayer <mhartmay@linux.ibm.com>
  Cc: <stable@vger.kernel.org> # 5.19+
  Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
  [agordeev@linux.ibm.com: remove duplicates in commit message]
  Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
  Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
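  The shape of the missing guard, sketched against the generic ChaCha library hook (a hedged illustration, not the literal hunk; the exact call site lives in arch/s390/crypto):

      void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
                             unsigned int bytes, int nrounds)
      {
              /*
               * Fall back to the generic C implementation when the vector
               * facility is not installed, instead of branching into the
               * chacha20_vx assembly unconditionally.
               */
              if (!MACHINE_HAS_VX) {
                      chacha_crypt_generic(state, dst, src, bytes, nrounds);
                      return;
              }
              /* vector-accelerated path as before */
      }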
* powerpc/bpf: populate extable entries only during the last pass (Hari Bathini, 2023-05-24; 1 file, -0/+2)

  commit 35a4b8ce4ac00e940b46b1034916ccb22ce9bdef upstream.

  Since commit 85e031154c7c ("powerpc/bpf: Perform complete extra passes to update addresses"), two additional passes are performed to avoid space and CPU time wastage on powerpc. But these extra passes led to WARN_ON_ONCE() hits in bpf_add_extable_entry() as extable entries are populated again, during the extra pass, without resetting the index. Fix it by resetting entry index before repopulating extable entries, if and when there is an additional pass.

  Fixes: 85e031154c7c ("powerpc/bpf: Perform complete extra passes to update addresses")
  Cc: stable@vger.kernel.org # v6.3+
  Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
  Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Link: https://msgid.link/20230425065829.18189-1-hbathini@linux.ibm.com
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/64s/radix: Fix soft dirty tracking (Michael Ellerman, 2023-05-24; 1 file, -2/+2)

  commit 66b2ca086210732954a7790d63d35542936fc664 upstream.

  It was reported that soft dirty tracking doesn't work when using the Radix MMU.

  The tracking is supposed to work by clearing the soft dirty bit for a mapping and then write protecting the PTE. If/when the page is written to, a page fault occurs and the soft dirty bit is added back via pte_mkdirty(). For example in wp_page_reuse():

      entry = maybe_mkwrite(pte_mkdirty(entry), vma);
      if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
              update_mmu_cache(vma, vmf->address, vmf->pte);

  Unfortunately on radix _PAGE_SOFTDIRTY is being dropped by radix__ptep_set_access_flags(), called from ptep_set_access_flags(), meaning the soft dirty bit is not set even though the page has been written to.

  Fix it by adding _PAGE_SOFTDIRTY to the set of bits that are able to be changed in radix__ptep_set_access_flags().

  Fixes: b0b5e9b13047 ("powerpc/mm/radix: Add radix pte #defines")
  Cc: stable@vger.kernel.org # v4.7+
  Reported-by: Dan Horák <dan@danny.cz>
  Link: https://lore.kernel.org/r/20230511095558.56663a50f86bdc4cd97700b7@danny.cz
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Link: https://msgid.link/20230511114224.977423-1-mpe@ellerman.id.au
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
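  A hedged sketch of the shape of the change in radix__ptep_set_access_flags() (the exact mask expression in the tree may differ; this only illustrates letting the soft-dirty bit through alongside the bits that were already allowed to change):

      /* Bits the access-flags update is allowed to set on radix. */
      unsigned long set = pte_val(entry) &
                          (_PAGE_DIRTY | _PAGE_SOFTDIRTY |
                           _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);

  Without _PAGE_SOFTDIRTY in that set, the bit added by pte_mkdirty() in wp_page_reuse() is silently discarded, which is exactly the reported symptom.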
* powerpc/iommu: Incorrect DDW Table is referenced for SR-IOV device (Gaurav Batra, 2023-05-24; 2 files, -5/+12)

  commit 1f7aacc5eb9ed2cc17be7a90da5cd559effb9d59 upstream.

  For an SR-IOV device, while enabling DDW, a new table is created and added at index 1 in the group. In the below 2 scenarios, the table is incorrectly referenced at index 0 (which is where the table is for default DMA window).

  1. When adding DDW. This issue is exposed with "slub_debug". Error thrown out from dma_iommu_dma_supported():

      Warning: IOMMU offset too big for device mask
      mask: 0xffffffff, table offset: 0x800000000000000

  2. During dynamic removal of the PCI device. Error is from iommu_tce_table_put() since a NULL table pointer is passed in.

  Fixes: 381ceda88c4c ("powerpc/pseries/iommu: Make use of DDW for indirect mapping")
  Cc: stable@vger.kernel.org # v5.15+
  Signed-off-by: Gaurav Batra <gbatra@linux.vnet.ibm.com>
  Reviewed-by: Brian King <brking@linux.vnet.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Link: https://msgid.link/20230505184701.91613-1-gbatra@linux.vnet.ibm.com
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/iommu: DMA address offset is incorrectly calculated with 2MB TCEs (Gaurav Batra, 2023-05-24; 1 file, -4/+7)

  commit 096339ab84f36beae0b1db25e0ce63fb3873e8b2 upstream.

  When DMA window is backed by 2MB TCEs, the DMA address for the mapped page should be the offset of the page relative to the 2MB TCE. The code was incorrectly setting the DMA address to the beginning of the TCE range.

  Mellanox driver is reporting timeout trying to ENABLE_HCA for an SR-IOV ethernet port, when DMA window is backed by 2MB TCEs.

  Fixes: 387273118714 ("powerps/pseries/dma: Add support for 2M IOMMU page size")
  Cc: stable@vger.kernel.org # v5.16+
  Signed-off-by: Gaurav Batra <gbatra@linux.vnet.ibm.com>
  Reviewed-by: Greg Joyce <gjoyce@linux.vnet.ibm.com>
  Reviewed-by: Brian King <brking@linux.vnet.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Link: https://msgid.link/20230504175913.83844-1-gbatra@linux.vnet.ibm.com
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
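  A small worked illustration of the addressing rule being fixed, with made-up numbers (variable names are purely illustrative):

      unsigned long tce_size  = 2UL << 20;   /* 2MB TCE backing the window */
      unsigned long io_offset = 0x201000;    /* buffer's offset inside the DMA window */

      /* broken: returns the start of the covering TCE, losing the in-TCE offset */
      unsigned long wrong = io_offset & ~(tce_size - 1);   /* 0x200000 */

      /* fixed: the page's offset relative to its 2MB TCE is preserved */
      unsigned long right = io_offset;                      /* 0x201000 */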
* KVM: arm64: Infer the PA offset from IPA in stage-2 map walker (Oliver Upton, 2023-05-24; 2 files, -4/+29)

  commit 1f0f4a2ef7a5693b135ce174e71f116db4bd684d upstream.

  Until now, the page table walker counted increments to the PA and IPA of a walk in two separate places. While the PA is incremented as soon as a leaf PTE is installed in stage2_map_walker_try_leaf(), the IPA is actually bumped in the generic table walker context. Critically, __kvm_pgtable_visit() rereads the PTE after the LEAF callback returns to work out if a table or leaf was installed, and only bumps the IPA for a leaf PTE.

  This arrangement worked fine when we handled faults behind the write lock, as the walker had exclusive access to the stage-2 page tables. However, commit 1577cb5823ce ("KVM: arm64: Handle stage-2 faults in parallel") started handling all stage-2 faults behind the read lock, opening up a race where a walker could increment the PA but not the IPA of a walk. Nothing good ensues, as the walker starts mapping with the incorrect IPA -> PA relationship.

  For example, assume that two vCPUs took a data abort on the same IPA. One observes that dirty logging is disabled, and the other observed that it is enabled:

      vCPU attempting PMD mapping              vCPU attempting PTE mapping
      ======================================   =====================================
      /* install PMD */
      stage2_make_pte(ctx, leaf);
      data->phys += granule;
                                               /* replace PMD with a table */
                                               stage2_try_break_pte(ctx, data->mmu);
                                               stage2_make_pte(ctx, table);
      /* table is observed */
      ctx.old = READ_ONCE(*ptep);
      table = kvm_pte_table(ctx.old, level);

      /*
       * map walk continues w/o incrementing
       * IPA.
       */
      __kvm_pgtable_walk(..., level + 1);

  Bring an end to the whole mess by using the IPA as the single source of truth for how far along a walk has gotten. Work out the correct PA to map by calculating the IPA offset from the beginning of the walk and add that to the starting physical address.

  Cc: stable@vger.kernel.org
  Fixes: 1577cb5823ce ("KVM: arm64: Handle stage-2 faults in parallel")
  Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
  Signed-off-by: Marc Zyngier <maz@kernel.org>
  Link: https://lore.kernel.org/r/20230421071606.1603916-2-oliver.upton@linux.dev
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* parisc: Replace regular spinlock with spin_trylock on panic path (Guilherme G. Piccoli, 2023-05-24; 2 files, -4/+24)

  [ Upstream commit 829632dae8321787525ee37dc4828bbe6edafdae ]

  The panic notifiers' callbacks execute in an atomic context, with interrupts/preemption disabled, and all CPUs not running the panic function are off, so it's very dangerous to wait on a regular spinlock, there's a risk of deadlock.

  Refactor the panic notifier of parisc/power driver to make use of spin_trylock - for that, we've added a second version of the soft-power function. Also, some comments were reorganized and trailing white spaces, useless header inclusion and blank lines were removed.

  Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
  Cc: Jeroen Roovers <jer@xs4all.nl>
  Acked-by: Helge Deller <deller@gmx.de> # parisc
  Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Signed-off-by: Helge Deller <deller@gmx.de>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* riscv: Fix EFI stub usage of KASAN instrumented strcmp function (Alexandre Ghiti, 2023-05-24; 1 file, -2/+0)

  [ Upstream commit 617955ca6e275c4dd0dcf5316fca7fc04a8f2fe6 ]

  The EFI stub must not use any KASAN instrumented code as the kernel proper did not initialize the thread pointer and the mapping for the KASAN shadow region. Avoid using the generic strcmp function, instead use the one in drivers/firmware/efi/libstub/string.c.

  Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Atish Patra <atishp@rivosinc.com>
  Link: https://lore.kernel.org/r/20230203075232.274282-5-alexghiti@rivosinc.com
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc: Use of_property_present() for testing DT property presence (Rob Herring, 2023-05-24; 9 files, -13/+12)

  [ Upstream commit 857d423c74228cfa064f79ff3a16b163fdb8d542 ]

  It is preferred to use typed property access functions (i.e. of_property_read_<type> functions) rather than low-level of_get_property/of_find_property functions for reading properties. As part of this, convert of_get_property/of_find_property calls to the recently added of_property_present() helper when we just want to test for presence of a property and nothing more.

  Signed-off-by: Rob Herring <robh@kernel.org>
  [mpe: Drop change in ppc4xx_probe_pci_bridge(), formatting]
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Link: https://msgid.link/20230310144657.1541039-1-robh@kernel.org
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: qcom: sm6115-j606f: Add ramoops node (Konrad Dybcio, 2023-05-24; 1 file, -0/+11)

  [ Upstream commit 8b0ac59c2da69aaf8e65c6bd648a06b755975302 ]

  Add a ramoops node to enable retrieving crash logs.

  Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org>
  Signed-off-by: Bjorn Andersson <andersson@kernel.org>
  Link: https://lore.kernel.org/r/20230406-topic-lenovo_features-v2-1-625d7cb4a944@linaro.org
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: qcom: sdm845-polaris: Drop inexistent properties (Konrad Dybcio, 2023-05-24; 1 file, -2/+0)

  [ Upstream commit fbc3a1df2866608ca43e7e6d602f66208a5afd88 ]

  Drop the qcom,snoc-host-cap-skip-quirk that was never introduced to solve schema warnings.

  Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org>
  Signed-off-by: Bjorn Andersson <andersson@kernel.org>
  Link: https://lore.kernel.org/r/20230406-topic-ath10k_bindings-v3-2-00895afc7764@linaro.org
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: imx8mq-librem5: Remove dis_u3_susphy_quirk from usb_dwc3_0 (Sebastian Krzyszkowiak, 2023-05-24; 1 file, -1/+0)

  [ Upstream commit cfe9de291bd2bbce18c5cd79e1dd582cbbacdb4f ]

  This reduces power consumption in system suspend by about 10%.

  Signed-off-by: Sebastian Krzyszkowiak <sebastian.krzyszkowiak@puri.sm>
  Signed-off-by: Martin Kepplinger <martin.kepplinger@puri.sm>
  Signed-off-by: Shawn Guo <shawnguo@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: qcom: msm8996: Add missing DWC3 quirks (Konrad Dybcio, 2023-05-24; 1 file, -0/+3)

  [ Upstream commit d0af0537e28f6eace02deed63b585396de939213 ]

  Add missing dwc3 quirks from msm-3.18. Unfortunately, none of them make `dwc3-qcom 6af8800.usb: HS-PHY not in L2` go away.

  Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org>
  Signed-off-by: Bjorn Andersson <andersson@kernel.org>
  Link: https://lore.kernel.org/r/20230302011849.1873056-1-konrad.dybcio@linaro.org
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* ARM: 9296/1: HP Jornada 7XX: fix kernel-doc warnings (Randy Dunlap, 2023-05-24; 1 file, -1/+4)

  [ Upstream commit 46dd6078dbc7e363a8bb01209da67015a1538929 ]

  Fix kernel-doc warnings from the kernel test robot:

      jornada720_ssp.c:24: warning: Function parameter or member 'jornada_ssp_lock' not described in 'DEFINE_SPINLOCK'
      jornada720_ssp.c:24: warning: expecting prototype for arch/arm/mac(). Prototype was for DEFINE_SPINLOCK() instead
      jornada720_ssp.c:34: warning: Function parameter or member 'byte' not described in 'jornada_ssp_reverse'
      jornada720_ssp.c:57: warning: Function parameter or member 'byte' not described in 'jornada_ssp_byte'
      jornada720_ssp.c:85: warning: Function parameter or member 'byte' not described in 'jornada_ssp_inout'

  Link: lore.kernel.org/r/202304210535.tWby3jWF-lkp@intel.com
  Fixes: 69ebb22277a5 ("[ARM] 4506/1: HP Jornada 7XX: Addition of SSP Platform Driver")
  Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
  Reported-by: kernel test robot <lkp@intel.com>
  Cc: Arnd Bergmann <arnd@arndb.de>
  Cc: Kristoffer Ericson <Kristoffer.ericson@gmail.com>
  Cc: patches@armlinux.org.uk
  Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* s390/mm: fix direct map accounting (Heiko Carstens, 2023-05-17; 3 files, -5/+18)

  [ Upstream commit 81e8479649853ffafc714aca4a9c0262efd3160a ]

  Commit bb1520d581a3 ("s390/mm: start kernel with DAT enabled") did not implement direct map accounting in the early page table setup code. In result the reported values are bogus now:

      $cat /proc/meminfo
      ...
      DirectMap4k:            5120 kB
      DirectMap1M:    18446744073709546496 kB
      DirectMap2G:               0 kB

  Fix this by adding the missing accounting. The result looks sane again:

      $cat /proc/meminfo
      ...
      DirectMap4k:            6156 kB
      DirectMap1M:         2091008 kB
      DirectMap2G:         6291456 kB

  Fixes: bb1520d581a3 ("s390/mm: start kernel with DAT enabled")
  Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
  Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
  Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* s390/mm: rename POPULATE_ONE2ONE to POPULATE_DIRECT (Heiko Carstens, 2023-05-17; 1 file, -4/+4)

  [ Upstream commit 07fdd6627f7f9c72ed68d531653b56df81da9996 ]

  Architectures generally use the "direct map" wording for mapping the whole physical memory. Use that wording as well in arch/s390/boot/vmem.c, instead of "one to one" in order to avoid confusion. This also matches what is already done in arch/s390/mm/vmem.c.

  Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
  Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
  Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* x86: fix clear_user_rep_good() exception handling annotation (Linus Torvalds, 2023-05-17; 1 file, -1/+1)

  This code no longer exists in mainline, because it was removed in commit d2c95f9d6802 ("x86: don't use REP_GOOD or ERMS for user memory clearing") upstream.

  However, rather than backport the full range of x86 memory clearing and copying cleanups, fix the exception table annotation placement for the final 'rep movsb' in clear_user_rep_good(): rather than pointing at the actual instruction that did the user space access, it pointed to the register move just before it.

  That made sense from a code flow standpoint, but not from an actual usage standpoint: it means that if user access takes an exception, the exception handler won't actually find the instruction in the exception tables. As a result, rather than fixing it up and returning -EFAULT, it would then turn it into a kernel oops report instead, something like:

      BUG: unable to handle page fault for address: 0000000020081000
      #PF: supervisor write access in kernel mode
      #PF: error_code(0x0002) - not-present page
      ...
      RIP: 0010:clear_user_rep_good+0x1c/0x30 arch/x86/lib/clear_page_64.S:147
      ...
      Call Trace:
       __clear_user arch/x86/include/asm/uaccess_64.h:103 [inline]
       clear_user arch/x86/include/asm/uaccess_64.h:124 [inline]
       iov_iter_zero+0x709/0x1290 lib/iov_iter.c:800
       iomap_dio_hole_iter fs/iomap/direct-io.c:389 [inline]
       iomap_dio_iter fs/iomap/direct-io.c:440 [inline]
       __iomap_dio_rw+0xe3d/0x1cd0 fs/iomap/direct-io.c:601
       iomap_dio_rw+0x40/0xa0 fs/iomap/direct-io.c:689
       ext4_dio_read_iter fs/ext4/file.c:94 [inline]
       ext4_file_read_iter+0x4be/0x690 fs/ext4/file.c:145
       call_read_iter include/linux/fs.h:2183 [inline]
       do_iter_readv_writev+0x2e0/0x3b0 fs/read_write.c:733
       do_iter_read+0x2f2/0x750 fs/read_write.c:796
       vfs_readv+0xe5/0x150 fs/read_write.c:916
       do_preadv+0x1b6/0x270 fs/read_write.c:1008
       __do_sys_preadv2 fs/read_write.c:1070 [inline]
       __se_sys_preadv2 fs/read_write.c:1061 [inline]
       __x64_sys_preadv2+0xef/0x150 fs/read_write.c:1061
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

  which then looks like a filesystem bug rather than the incorrect exception annotation that it is.

  [ The alternative to this one-liner fix is to take the upstream series that cleans this all up:

       68674f94ffc9 ("x86: don't use REP_GOOD or ERMS for small memory copies")
       20f3337d350c ("x86: don't use REP_GOOD or ERMS for small memory clearing")
       adfcf4231b8c ("x86: don't use REP_GOOD or ERMS for user memory copies")
     * d2c95f9d6802 ("x86: don't use REP_GOOD or ERMS for user memory clearing")
       3639a535587d ("x86: move stac/clac from user copy routines into callers")
       577e6a7fd50d ("x86: inline the 'rep movs' in user copies for the FSRM case")
       8c9b6a88b7e2 ("x86: improve on the non-rep 'clear_user' function")
       427fda2c8a49 ("x86: improve on the non-rep 'copy_user' function")
     * e046fe5a36a9 ("x86: set FSRS automatically on AMD CPUs that have FSRM")
       e1f2750edc4a ("x86: remove 'zerorest' argument from __copy_user_nocache()")
       034ff37d3407 ("x86: rewrite '__copy_user_nocache' function")

    with either the whole series or at a minimum the two marked commits being needed to fix this issue ]

  Reported-by: syzbot <syzbot+401145a9a237779feb26@syzkaller.appspotmail.com>
  Link: https://syzkaller.appspot.com/bug?extid=401145a9a237779feb26
  Fixes: 0db7058e8e23 ("x86/clear_user: Make it faster")
  Cc: Borislav Petkov <bp@alien8.de>
  Cc: stable@kernel.org
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
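  For readers unfamiliar with x86 exception tables, a hedged assembly sketch of the rule this commit restores (labels, the fixup target name, and the choice of 'rep stosb' are illustrative, not the literal hunk): the extable entry must name the label on the instruction that actually touches user memory, not the register move in front of it.

              movl    %edx,%ecx               /* setup only, cannot fault on user memory */
      3:      rep stosb                       /* the user-memory access that may fault */

              _ASM_EXTABLE_UA(3b, .Lfixup)    /* must reference 3:, not the movl */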
* x86/amd_nb: Add PCI ID for family 19h model 78h (Mario Limonciello, 2023-05-17; 1 file, -0/+2)

  commit 23a5b8bb022c1e071ca91b1a9c10f0ad6a0966e9 upstream.

  Commit 310e782a99c7 ("platform/x86/amd: pmc: Utilize SMN index 0 for driver probe") switched to using amd_smn_read() which relies upon the misc PCI ID used by DF function 3 being included in a table. The ID for model 78h is missing in that table, so amd_smn_read() doesn't work.

  Add the missing ID into amd_nb, restoring s2idle on this system.

  [ bp: Simplify commit message. ]

  Fixes: 310e782a99c7 ("platform/x86/amd: pmc: Utilize SMN index 0 for driver probe")
  Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
  Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
  Acked-by: Bjorn Helgaas <bhelgaas@google.com> # pci_ids.h
  Acked-by: Guenter Roeck <linux@roeck-us.net>
  Link: https://lore.kernel.org/r/20230427053338.16653-2-mario.limonciello@amd.com
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* perf/x86: Fix missing sample size update on AMD BRS (Namhyung Kim, 2023-05-17; 1 file, -4/+2)

  commit 90befef5a9e820ccccc33181ec14c015980300cc upstream.

  It missed to convert a PERF_SAMPLE_BRANCH_STACK user to call the new perf_sample_save_brstack() helper in order to update the dyn_size. This affects AMD Zen3 machines with the branch-brs event.

  Fixes: eb55b455ef9c ("perf/core: Add perf_sample_save_brstack() helper")
  Signed-off-by: Namhyung Kim <namhyung@kernel.org>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Cc: stable@vger.kernel.org
  Link: https://lkml.kernel.org/r/20230427030527.580841-1-namhyung@kernel.org
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* parisc: Fix encoding of swp_entry due to added SWP_EXCLUSIVE flag (Helge Deller, 2023-05-17; 1 file, -4/+4)

  commit 6f9e98849edaa8aefc4030ff3500e41556e83ff7 upstream.

  Fix the __swp_offset() and __swp_entry() macros due to commit 6d239fc78c0b ("parisc/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE") which introduced the SWP_EXCLUSIVE flag by reusing the _PAGE_ACCESSED flag.

  Reported-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
  Tested-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
  Reviewed-by: David Hildenbrand <david@redhat.com>
  Signed-off-by: Helge Deller <deller@gmx.de>
  Fixes: 6d239fc78c0b ("parisc/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE")
  Cc: <stable@vger.kernel.org> # v6.3+
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ARM: dts: aspeed: romed8hm3: Fix GPIO polarity of system-fault LED (Zev Weiss, 2023-05-17; 1 file, -1/+1)

  commit a3fd10732d276d7cf372c6746a78a1c8b6aa7541 upstream.

  Turns out it's in fact not the same as the heartbeat LED.

  Signed-off-by: Zev Weiss <zev@bewilderbeest.net>
  Cc: stable@vger.kernel.org # v5.18+
  Fixes: a9a3d60b937a ("ARM: dts: aspeed: Add ASRock ROMED8HM3 BMC")
  Link: https://lore.kernel.org/r/20230224000400.12226-2-zev@bewilderbeest.net
  Signed-off-by: Joel Stanley <joel@jms.id.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ARM: dts: s5pv210: correct MIPI CSIS clock name (Krzysztof Kozlowski, 2023-05-17; 1 file, -1/+1)

  commit 665b9459bb53b8f19bd1541567e1fe9782c83c4b upstream.

  The Samsung S5P/Exynos MIPI CSIS bindings and Linux driver expect first clock name to be "csis". Otherwise the driver fails to probe.

  Fixes: 94ad0f6d9278 ("ARM: dts: Add Device tree for s5pv210 SoC")
  Cc: <stable@vger.kernel.org>
  Link: https://lore.kernel.org/r/20230212185818.43503-2-krzysztof.kozlowski@linaro.org
  Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ARM: dts: exynos: fix WM8960 clock name in Itop Elite (Krzysztof Kozlowski, 2023-05-17; 1 file, -1/+1)

  commit 6c950c20da38debf1ed531e0b972bd8b53d1c11f upstream.

  The WM8960 Linux driver expects the clock to be named "mclk". Otherwise the clock will be ignored and not prepared/enabled by the driver.

  Cc: <stable@vger.kernel.org>
  Fixes: 339b2fb36a67 ("ARM: dts: exynos: Add TOPEET itop elite based board")
  Link: https://lore.kernel.org/r/20230217150627.779764-3-krzysztof.kozlowski@linaro.org
  Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ARM: dts: aspeed: asrock: Correct firmware flash SPI clocks (Zev Weiss, 2023-05-17; 2 files, -2/+2)

  commit 9dedb724446913ea7b1591b4b3d2e3e909090980 upstream.

  While I'm not aware of any problems that have occurred running these at 100 MHz, the official word from ASRock is that 50 MHz is the correct speed to use, so let's be safe and use that instead.

  Signed-off-by: Zev Weiss <zev@bewilderbeest.net>
  Cc: stable@vger.kernel.org
  Fixes: 2b81613ce417 ("ARM: dts: aspeed: Add ASRock E3C246D4I BMC")
  Fixes: a9a3d60b937a ("ARM: dts: aspeed: Add ASRock ROMED8HM3 BMC")
  Link: https://lore.kernel.org/r/20230224000400.12226-4-zev@bewilderbeest.net
  Signed-off-by: Joel Stanley <joel@jms.id.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* sh: nmi_debug: fix return value of __setup handler (Randy Dunlap, 2023-05-17; 1 file, -2/+2)

  commit d1155e4132de712a9d3066e2667ceaad39a539c5 upstream.

  __setup() handlers should return 1 to obsolete_checksetup() in init/main.c to indicate that the boot option has been handled. A return of 0 causes the boot option/value to be listed as an Unknown kernel parameter and added to init's (limited) argument or environment strings. Also, error return codes don't mean anything to obsolete_checksetup() -- only non-zero (usually 1) or zero. So return 1 from nmi_debug_setup().

  Fixes: 1e1030dccb10 ("sh: nmi_debug support.")
  Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
  Reported-by: Igor Zhbanov <izh1979@gmail.com>
  Link: lore.kernel.org/r/64644a2f-4a20-bab3-1e15-3b2cdd0defe3@omprussia.ru
  Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
  Cc: Rich Felker <dalias@libc.org>
  Cc: linux-sh@vger.kernel.org
  Cc: stable@vger.kernel.org
  Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Link: https://lore.kernel.org/r/20230306040037.20350-3-rdunlap@infradead.org
  Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
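  A minimal sketch of the corrected __setup() handler pattern (the option string and parsing are assumed from the commit context, not copied from the patch):

      static int __init nmi_debug_setup(char *str)
      {
              /* parse str and configure NMI debugging here */

              return 1;       /* option handled; do not pass it to init */
      }
      __setup("nmi_debug=", nmi_debug_setup);

  Returning 0 (or a negative errno) here would make init/main.c report "nmi_debug=..." as an unknown kernel parameter and forward it to init.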
* sh: init: use OF_EARLY_FLATTREE for early init (Randy Dunlap, 2023-05-17; 2 files, -5/+5)

  commit 6cba655543c7959f8a6d2979b9d40a6a66b7ed4f upstream.

  When CONFIG_OF_EARLY_FLATTREE and CONFIG_SH_DEVICE_TREE are not set, SH3 build fails with a call to early_init_dt_scan(), so in arch/sh/kernel/setup.c and arch/sh/kernel/head_32.S, use CONFIG_OF_EARLY_FLATTREE instead of CONFIG_OF_FLATTREE.

  Fixes this build error:

      ../arch/sh/kernel/setup.c: In function 'sh_fdt_init':
      ../arch/sh/kernel/setup.c:262:26: error: implicit declaration of function 'early_init_dt_scan' [-Werror=implicit-function-declaration]
        262 |         if (!dt_virt || !early_init_dt_scan(dt_virt)) {

  Fixes: 03767daa1387 ("sh: fix build regression with CONFIG_OF && !CONFIG_OF_FLATTREE")
  Fixes: eb6b6930a70f ("sh: fix memory corruption of unflattened device tree")
  Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
  Suggested-by: Rob Herring <robh+dt@kernel.org>
  Cc: Frank Rowand <frowand.list@gmail.com>
  Cc: devicetree@vger.kernel.org
  Cc: Rich Felker <dalias@libc.org>
  Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
  Cc: Geert Uytterhoeven <geert+renesas@glider.be>
  Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Cc: linux-sh@vger.kernel.org
  Cc: stable@vger.kernel.org
  Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Link: https://lore.kernel.org/r/20230306040037.20350-4-rdunlap@infradead.org
  Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* sh: mcount.S: fix build error when PRINTK is not enabled (Randy Dunlap, 2023-05-17; 1 file, -1/+1)

  commit c2bd1e18c6f85c0027da2e5e7753b9bfd9f8e6dc upstream.

  Fix a build error in mcount.S when CONFIG_PRINTK is not enabled. Fixes this build error:

      sh2-linux-ld: arch/sh/lib/mcount.o: in function `stack_panic':
      (.text+0xec): undefined reference to `dump_stack'

  Fixes: e460ab27b6c3 ("sh: Fix up stack overflow check with ftrace disabled.")
  Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
  Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
  Cc: Rich Felker <dalias@libc.org>
  Suggested-by: Geert Uytterhoeven <geert@linux-m68k.org>
  Cc: stable@vger.kernel.org
  Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Link: https://lore.kernel.org/r/20230306040037.20350-8-rdunlap@infradead.org
  Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* sh: math-emu: fix macro redefined warning (Randy Dunlap, 2023-05-17; 1 file, -4/+0)

  commit 58a49ad90939386a8682e842c474a0d2c00ec39c upstream.

  Fix a warning that was reported by the kernel test robot:

      In file included from ../include/math-emu/soft-fp.h:27,
                       from ../arch/sh/math-emu/math.c:22:
      ../arch/sh/include/asm/sfp-machine.h:17: warning: "__BYTE_ORDER" redefined
         17 | #define __BYTE_ORDER __BIG_ENDIAN
      In file included from ../arch/sh/math-emu/math.c:21:
      ../arch/sh/math-emu/sfp-util.h:71: note: this is the location of the previous definition
         71 | #define __BYTE_ORDER __LITTLE_ENDIAN

  Fixes: b929926f01f2 ("sh: define __BIG_ENDIAN for math-emu")
  Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
  Reported-by: kernel test robot <lkp@intel.com>
  Link: lore.kernel.org/r/202111121827.6v6SXtVv-lkp@intel.com
  Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
  Cc: Rich Felker <dalias@libc.org>
  Cc: linux-sh@vger.kernel.org
  Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Cc: stable@vger.kernel.org
  Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Link: https://lore.kernel.org/r/20230306040037.20350-5-rdunlap@infradead.org
  Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* x86/retbleed: Fix return thunk alignment (Borislav Petkov (AMD), 2023-05-17; 1 file, -2/+2)

  commit 9a48d604672220545d209e9996c2a1edbb5637f6 upstream.

  SYM_FUNC_START_LOCAL_NOALIGN() adds an endbr leading to this layout (leaving only the last 2 bytes of the address):

      3bff <zen_untrain_ret>:
      3bff:  f3 0f 1e fa             endbr64
      3c03:  f6                      test   $0xcc,%bl

      3c04 <__x86_return_thunk>:
      3c04:  c3                      ret
      3c05:  cc                      int3
      3c06:  0f ae e8                lfence

  However, "the RET at __x86_return_thunk must be on a 64 byte boundary, for alignment within the BTB." Use SYM_START instead.

  Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
  Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
  Cc: <stable@kernel.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* KVM: x86/mmu: Refresh CR0.WP prior to checking for emulated permission faults (Sean Christopherson, 2023-05-17; 2 files, -1/+40)

  [ Upstream commit cf9f4c0eb1699d306e348b1fd0225af7b2c282d3 ]

  Refresh the MMU's snapshot of the vCPU's CR0.WP prior to checking for permission faults when emulating a guest memory access and CR0.WP may be guest owned. If the guest toggles only CR0.WP and triggers emulation of a supervisor write, e.g. when KVM is emulating UMIP, KVM may consume a stale CR0.WP, i.e. use stale protection bits metadata.

  Note, KVM passes through CR0.WP if and only if EPT is enabled as CR0.WP is part of the MMU role for legacy shadow paging, and SVM (NPT) doesn't support per-bit interception controls for CR0. Don't bother checking for EPT vs. NPT as the "old == new" check will always be true under NPT, i.e. the only cost is the read of vcpu->arch.cr4 (SVM unconditionally grabs CR0 from the VMCB on VM-Exit).

  Reported-by: Mathias Krause <minipli@grsecurity.net>
  Link: https://lkml.kernel.org/r/677169b4-051f-fcae-756b-9a3e1bb9f8fe%40grsecurity.net
  Fixes: fb509f76acc8 ("KVM: VMX: Make CR0.WP a guest owned bit")
  Tested-by: Mathias Krause <minipli@grsecurity.net>
  Link: https://lore.kernel.org/r/20230405002608.418442-1-seanjc@google.com
  Signed-off-by: Sean Christopherson <seanjc@google.com>
  Signed-off-by: Mathias Krause <minipli@grsecurity.net> # backport to v6.3.x
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* KVM: VMX: Make CR0.WP a guest owned bit (Mathias Krause, 2023-05-17; 4 files, -4/+22)

  [ Upstream commit fb509f76acc8d42bed11bca308404f81c2be856a ]

  Guests like grsecurity that make heavy use of CR0.WP to implement kernel level W^X will suffer from the implied VMEXITs.

  With EPT there is no need to intercept a guest change of CR0.WP, so simply make it a guest owned bit if we can do so.

  This implies that a read of a guest's CR0.WP bit might need a VMREAD. However, the only potentially affected user seems to be kvm_init_mmu() which is a heavy operation to begin with. But also most callers already cache the full value of CR0 anyway, so no additional VMREAD is needed. The only exception is nested_vmx_load_cr3().

  This change is VMX-specific, as SVM has no such fine grained control register intercept control.

  Suggested-by: Sean Christopherson <seanjc@google.com>
  Signed-off-by: Mathias Krause <minipli@grsecurity.net>
  Link: https://lore.kernel.org/r/20230322013731.102955-7-minipli@grsecurity.net
  Co-developed-by: Sean Christopherson <seanjc@google.com>
  Signed-off-by: Sean Christopherson <seanjc@google.com>
  Signed-off-by: Mathias Krause <minipli@grsecurity.net>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* KVM: x86: Make use of kvm_read_cr*_bits() when testing bits (Mathias Krause, 2023-05-17; 2 files, -4/+4)

  [ Upstream commit 74cdc836919bf34684ef66f995273f35e2189daf ]

  Make use of the kvm_read_cr{0,4}_bits() helper functions when we only want to know the state of certain bits instead of the whole register. This not only makes the intent cleaner, it also avoids a potential VMREAD in case the tested bits aren't guest owned.

  Signed-off-by: Mathias Krause <minipli@grsecurity.net>
  Link: https://lore.kernel.org/r/20230322013731.102955-5-minipli@grsecurity.net
  Signed-off-by: Sean Christopherson <seanjc@google.com>
  Signed-off-by: Mathias Krause <minipli@grsecurity.net>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
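  A hedged sketch of the pattern being converted (the wrapper function here is hypothetical; only the two KVM helpers and the CR4 bit are real names):

      static bool osxsave_enabled(struct kvm_vcpu *vcpu)
      {
              /* Open-coded form this series replaces:
               *     return (kvm_read_cr4(vcpu) & X86_CR4_OSXSAVE) != 0;
               * which may force a VMREAD of the whole register.
               */

              /* Helper-based form: only the tested bit is consulted, so no
               * VMREAD is needed unless that particular bit is guest owned. */
              return kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) != 0;
      }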
* KVM: x86: Do not unload MMU roots when only toggling CR0.WP with TDP enabled (Mathias Krause, 2023-05-17; 1 file, -0/+12)

  [ Upstream commit 01b31714bd90be2784f7145bf93b7f78f3d081e1 ]

  There is no need to unload the MMU roots with TDP enabled when only CR0.WP has changed -- the paging structures are still valid, only the permission bitmap needs to be updated.

  One heavy user of toggling CR0.WP is grsecurity's KERNEXEC feature to implement kernel W^X.

  The optimization brings a huge performance gain for this case as the following micro-benchmark running 'ssdd 10 50000' from rt-tests[1] on a grsecurity L1 VM shows (runtime in seconds, lower is better):

                            legacy     TDP    shadow
      kvm-x86/next@d8708b    8.43s   9.45s    70.3s
      +patch                 5.39s   5.63s    70.2s

  For legacy MMU this is ~36% faster, for TDP MMU even ~40% faster. Also TDP and legacy MMU now both have a similar runtime which vanishes the need to disable TDP MMU for grsecurity. Shadow MMU sees no measurable difference and is still slow, as expected.

  [1] https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git

  Signed-off-by: Mathias Krause <minipli@grsecurity.net>
  Link: https://lore.kernel.org/r/20230322013731.102955-3-minipli@grsecurity.net
  Co-developed-by: Sean Christopherson <seanjc@google.com>
  Signed-off-by: Sean Christopherson <seanjc@google.com>
  Signed-off-by: Mathias Krause <minipli@grsecurity.net>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* KVM: x86/mmu: Avoid indirect call for get_cr3 (Paolo Bonzini, 2023-05-17; 2 files, -12/+21)

  [ Upstream commit 2fdcc1b324189b5fb20655baebd40cd82e2bdf0c ]

  Most of the time, calls to get_guest_pgd result in calling kvm_read_cr3 (the exception is only nested TDP). Hardcode the default instead of using the get_cr3 function, avoiding a retpoline if they are enabled.

  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Mathias Krause <minipli@grsecurity.net>
  Link: https://lore.kernel.org/r/20230322013731.102955-2-minipli@grsecurity.net
  Signed-off-by: Sean Christopherson <seanjc@google.com>
  Signed-off-by: Mathias Krause <minipli@grsecurity.net> # backport to v6.3.x
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* KVM: s390: fix race in gmap_make_secure() (Claudio Imbrenda, 2023-05-17; 1 file, -21/+11)

  [ Upstream commit c148dc8e2fa403be501612ee409db866eeed35c0 ]

  Fix a potential race in gmap_make_secure() and remove the last user of follow_page() without FOLL_GET.

  The old code is locking something it doesn't have a reference to, and as explained by Jason and David in this discussion:

      https://lore.kernel.org/linux-mm/Y9J4P%2FRNvY1Ztn0Q@nvidia.com/

  it can lead to all kind of bad things, including the page getting unmapped (MADV_DONTNEED), freed, reallocated as a larger folio and the unlock_page() would target the wrong bit. There is also another race with the FOLL_WRITE, which could race between the follow_page() and the get_locked_pte().

  The main point is to remove the last use of follow_page() without FOLL_GET or FOLL_PIN, removing the races can be considered a nice bonus.

  Link: https://lore.kernel.org/linux-mm/Y9J4P%2FRNvY1Ztn0Q@nvidia.com/
  Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
  Fixes: 214d9bbcd3a6 ("s390/mm: provide memory management functions for protected KVM guests")
  Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
  Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
  Message-Id: <20230428092753.27913-2-imbrenda@linux.ibm.com>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* KVM: s390: pv: fix asynchronous teardown for small VMs (Claudio Imbrenda, 2023-05-17; 2 files, -0/+12)

  [ Upstream commit 292a7d6fca33df70ca4b8e9b0d0e74adf87582dc ]

  On machines without the Destroy Secure Configuration Fast UVC, the topmost level of page tables is set aside and freed asynchronously as last step of the asynchronous teardown.

  Each gmap has a host_to_guest radix tree mapping host (userspace) addresses (with 1M granularity) to gmap segment table entries (pmds).

  If a guest is smaller than 2GB, the topmost level of page tables is the segment table (i.e. there are only 2 levels). Replacing it means that the pointers in the host_to_guest mapping would become stale and cause all kinds of nasty issues.

  This patch fixes the issue by disallowing asynchronous teardown for guests with only 2 levels of page tables. Userspace should (and already does) try using the normal destroy if the asynchronous one fails.

  Update s390_replace_asce so it refuses to replace segment type ASCEs. This is still needed in case the normal destroy VM fails.

  Fixes: fb491d5500a7 ("KVM: s390: pv: asynchronous destroy for reboot")
  Reviewed-by: Marc Hartmayer <mhartmay@linux.ibm.com>
  Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
  Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
  Message-Id: <20230421085036.52511-2-imbrenda@linux.ibm.com>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: kernel: remove SHF_WRITE|SHF_EXECINSTR from .idmap.text (ndesaulniers@google.com, 2023-05-17; 3 files, -5/+5)

  [ Upstream commit 4df69e0df295822cdf816442fe4897f214cccb08 ]

  commit d54170812ef1 ("arm64: fix .idmap.text assertion for large kernels") modified some of the section assembler directives that declare .idmap.text to be SHF_ALLOC instead of SHF_ALLOC|SHF_WRITE|SHF_EXECINSTR.

  This patch fixes up the remaining stragglers that were left behind. Add Fixes tag so that this doesn't precede related change in stable.

  Fixes: d54170812ef1 ("arm64: fix .idmap.text assertion for large kernels")
  Reported-by: Greg Thelen <gthelen@google.com>
  Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
  Link: https://lore.kernel.org/r/20230428-awx-v2-1-b197ffa16edc@google.com
  Signed-off-by: Will Deacon <will@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* riscv: compat_syscall_table: Fixup compile warning (Guo Ren, 2023-05-17; 1 file, -0/+1)

  [ Upstream commit f9c4bbddece7eff1155c70d48e3c9c2a01b9d778 ]

  Fix the following compile warning:

      ../arch/riscv/kernel/compat_syscall_table.c:12:41: warning: initialized field overwritten [-Woverride-init]
         12 | #define __SYSCALL(nr, call)      [nr] = (call),
            |                                         ^
      ../include/uapi/asm-generic/unistd.h:567:1: note: in expansion of macro '__SYSCALL'
        567 | __SYSCALL(__NR_semget, sys_semget)

  Fixes: 59c10c52f573 ("riscv: compat: syscall: Add compat_sys_call_table implementation")
  Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
  Reported-by: kernel test robot <lkp@intel.com>
  Tested-by: Jisheng Zhang <jszhang@kernel.org>
  Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
  Signed-off-by: Guo Ren <guoren@kernel.org>
  Signed-off-by: Drew Fustini <dfustini@baylibre.com>
  Link: https://lore.kernel.org/r/20230501223353.2833899-1-dfustini@baylibre.com
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* RISC-V: mm: Enable huge page support to kernel_page_present() function (Sia Jee Heng, 2023-05-17; 1 file, -0/+8)

  [ Upstream commit a15c90b67a662c75f469822a7f95c7aaa049e28f ]

  Currently kernel_page_present() function doesn't support huge page detection causes the function to mistakenly return false to the hibernation core.

  Add huge page detection to the function to solve the problem.

  Fixes: 9e953cda5cdf ("riscv: Introduce huge page support for 32/64bit kernel")
  Signed-off-by: Sia Jee Heng <jeeheng.sia@starfivetech.com>
  Reviewed-by: Ley Foon Tan <leyfoon.tan@starfivetech.com>
  Reviewed-by: Mason Huo <mason.huo@starfivetech.com>
  Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
  Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
  Link: https://lore.kernel.org/r/20230330064321.1008373-4-jeeheng.sia@starfivetech.com
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
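  A hedged sketch of the added huge-page handling inside the kernel_page_present() walk (excerpt only; the surrounding PGD/P4D/PTE handling is unchanged, and the real patch may also check the P4D level):

      pud = pud_offset(p4d, addr);
      if (!pud_present(*pud))
              return false;
      if (pud_leaf(*pud))             /* 1GB leaf: the page is mapped */
              return true;

      pmd = pmd_offset(pud, addr);
      if (!pmd_present(*pmd))
              return false;
      if (pmd_leaf(*pmd))             /* 2MB leaf: the page is mapped */
              return true;

  Without the leaf checks, the walk descends into a non-existent PTE level for huge mappings and wrongly reports the page as not present to the hibernation core.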
* arm64: Fix label placement in record_mmu_state() (Neeraj Upadhyay, 2023-05-17; 1 file, -2/+2)

  [ Upstream commit 4e8f6e44bce8da3b0e2df37b12839f4bc9c9cabe ]

  Fix label so that pre_disable_mmu_workaround() is called before clearing sctlr_el1.M.

  Fixes: 2ced0f30a426 ("arm64: head: Switch endianness before populating the ID map")
  Signed-off-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Link: https://lore.kernel.org/r/20230425095700.22005-1-quic_neeraju@quicinc.com
  Signed-off-by: Will Deacon <will@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* ia64: fix an addr to taddr in huge_pte_offset() (Hugh Dickins, 2023-05-11; 1 file, -1/+1)

  commit 3647ebcfbfca384840231fe13fae665453238a61 upstream.

  I know nothing of ia64 htlbpage_to_page(), but guess that the p4d line should be using taddr rather than addr, like everywhere else.

  Link: https://lkml.kernel.org/r/732eae88-3beb-246-2c72-281de786740@google.com
  Fixes: c03ab9e32a2c ("ia64: add support for folded p4d page tables")
  Signed-off-by: Hugh Dickins <hughd@google.com>
  Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
  Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
  Cc: Ard Biesheuvel <ardb@kernel.org>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
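  The change amounts to one line in the huge_pte_offset() walk; a hedged sketch of its shape (taddr is the huge-page-truncated address used by the other levels of the walk):

      /* was: p4d_offset(pgd, addr) */
      p4d = p4d_offset(pgd, taddr);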