path: root/arch
Commit message | Author | Age | Files | Lines
...
* KVM: x86: Move x2APIC ICR helper above kvm_apic_write_nodecode()Sean Christopherson2024-10-041-23/+23
| | | | | | | | | | | | | | | commit d33234342f8b468e719e05649fd26549fb37ef8a upstream. Hoist kvm_x2apic_icr_write() above kvm_apic_write_nodecode() so that a local helper to _read_ the x2APIC ICR can be added and used in the nodecode path without needing a forward declaration. No functional change intended. Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20240719235107.3023592-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* KVM: x86: Enforce x2APIC's must-be-zero reserved ICR bitsSean Christopherson2024-10-041-1/+14
| | | | | | | | | | | | | | | | | | | | | commit 71bf395a276f0578d19e0ae137a7d1d816d08e0e upstream. Inject a #GP on a WRMSR(ICR) that attempts to set any reserved bits that are must-be-zero on both Intel and AMD, i.e. any reserved bits other than the BUSY bit, which Intel ignores and basically says is undefined. KVM's xapic_state_test selftest has been fudging the bug since commit 4b88b1a518b3 ("KVM: selftests: Enhance handling WRMSR ICR register in x2APIC mode"), which essentially removed the testcase instead of fixing the bug. WARN if the nodecode path triggers a #GP, as the CPU is supposed to check reserved bits for ICR when it's partially virtualized. Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20240719235107.3023592-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* KVM: arm64: Add memory length checks and remove inline in do_ffa_mem_xferSnehal Koukuntla2024-10-041-6/+15
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit f26a525b77e040d584e967369af1e018d2d59112 upstream. When we share memory through FF-A and the description of the buffers exceeds the size of the mapped buffer, the fragmentation API is used. The fragmentation API allows specifying chunks of descriptors in subsequent FF-A fragment calls and no upper limit has been established for this. The entire memory region transferred is identified by a handle which can be used to reclaim the transferred memory. To be able to reclaim the memory, the description of the buffers has to fit in the ffa_desc_buf. Add a bounds check on the FF-A sharing path to prevent the memory reclaim from failing. Also do_ffa_mem_xfer() does not need __always_inline, except for the BUILD_BUG_ON() aspect, which gets moved to a macro. [maz: fixed the BUILD_BUG_ON() breakage with LLVM, thanks to Wei-Lin Chang for the timely report] Fixes: 634d90cf0ac65 ("KVM: arm64: Handle FFA_MEM_LEND calls from the host") Cc: stable@vger.kernel.org Reviewed-by: Sebastian Ene <sebastianene@google.com> Signed-off-by: Snehal Koukuntla <snehalreddy@google.com> Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20240909180154.3267939-1-snehalreddy@google.com Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* xen: allow mapping ACPI data using a different physical addressJuergen Gross2024-10-048-1/+59
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 9221222c717dbddac1e3c49906525475d87a3a44 upstream. When running as a Xen PV dom0 the system needs to map ACPI data of the host using host physical addresses, while those addresses can conflict with the guest physical addresses of the loaded linux kernel. The same problem might apply in case a PV guest is configured to use the host memory map. This conflict can be solved by mapping the ACPI data to a different guest physical address, but mapping the data via acpi_os_ioremap() must still be possible using the host physical address, as this address might be generated by AML when referencing some of the ACPI data. When configured to support running as a Xen PV domain, have an implementation of acpi_os_ioremap() being aware of the possibility to need above mentioned translation of a host physical address to the guest physical address. This modification requires to #include linux/acpi.h in some sources which need to include asm/acpi.h directly. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* xen: move checks for e820 conflicts further upJuergen Gross2024-10-041-22/+22
| | | | | | | | | | | | | | | commit c4498ae316da5b5786ccd448fc555f3339b8e4ca upstream. Move the checks for e820 memory map conflicts using the xen_chk_is_e820_usable() helper further up in order to prepare resolving some of the possible conflicts by doing some e820 map modifications, which must happen before evaluating the RAM layout. Signed-off-by: Juergen Gross <jgross@suse.com> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ep93xx: clock: Fix off by one in ep93xx_div_recalc_rate()Dan Carpenter2024-10-041-1/+1
| | | | | | | | | | | | | | | | | | [ Upstream commit c7f06284a6427475e3df742215535ec3f6cd9662 ] The psc->div[] array has psc->num_div elements. These values come from the clk_hw_register_div() calls: adc_divisors and ARRAY_SIZE(adc_divisors), and so on. So this condition needs to be >= instead of > to prevent an out-of-bounds read. Fixes: 9645ccc7bd7a ("ep93xx: clock: convert in-place to COMMON_CLK") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Acked-by: Alexander Sverdlin <alexander.sverdlin@gmail.com> Reviewed-by: Nikita Shubin <nikita.shubin@maquefel.me> Signed-off-by: Alexander Sverdlin <alexander.sverdlin@gmail.com> Link: https://lore.kernel.org/r/1caf01ad4c0a8069535813c26c7f0b8ea011155e.camel@linaro.org Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
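A minimal user-space sketch of the off-by-one fixed above; the structure and function names are illustrative stand-ins, not the real ep93xx clock driver code:

#include <stddef.h>

struct psc_sketch {
        const unsigned int *div;   /* divisor table registered with the clock */
        size_t num_div;            /* number of valid entries in div[] */
};

static unsigned long div_recalc_rate_sketch(const struct psc_sketch *psc,
                                            unsigned long parent_rate,
                                            size_t index)
{
        /* index == num_div is already one past the end, so the guard must be
         * >=, not >, or div[num_div] is read out of bounds. */
        if (index >= psc->num_div)
                return 0;

        return parent_rate / psc->div[index];
}

int main(void)
{
        static const unsigned int adc_divisors_sketch[] = { 16, 4 };
        struct psc_sketch psc = { adc_divisors_sketch, 2 };

        /* index 2 is now rejected instead of reading past the end */
        return div_recalc_rate_sketch(&psc, 14745600, 2) == 0 ? 0 : 1;
}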
* crypto: powerpc/p10-aes-gcm - Disable CRYPTO_AES_GCM_P10Danny Tsen2024-10-041-0/+1
| | | | | | | | | | | | | | [ Upstream commit 44ac4625ea002deecd0c227336c95b724206c698 ] Data mismatch found when testing ipsec tunnel with AES/GCM crypto. Disabling CRYPTO_AES_GCM_P10 in Kconfig for this feature. Fixes: fd0e9b3e2ee6 ("crypto: p10-aes-gcm - An accelerated AES/GCM stitched implementation") Fixes: cdcecfd9991f ("crypto: p10-aes-gcm - Glue code for AES/GCM stitched implementation") Fixes: 45a4672b9a6e2 ("crypto: p10-aes-gcm - Update Kconfig and Makefile") Signed-off-by: Danny Tsen <dtsen@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
* riscv: Fix fp alignment bug in perf_callchain_user()Jinjie Ruan2024-10-041-1/+1
| | | | | | | | | | | | | | | | | | | [ Upstream commit 22ab08955ea13be04a8efd20cc30890e0afaa49c ] The standard RISC-V calling convention says: "The stack grows downward and the stack pointer is always kept 16-byte aligned". So perf_callchain_user() should check whether fp is 16-byte aligned. Link: https://riscv.org/wp-content/uploads/2015/01/riscv-calling.pdf Fixes: dbeb90b0c1eb ("riscv: Add perf callchain support") Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Cc: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/r/20240708032847.2998158-2-ruanjinjie@huawei.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
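A small stand-alone illustration of the alignment test implied above; this is a sketch, not the kernel's perf_callchain_user() itself:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Per the quoted psABI rule the stack pointer stays 16-byte aligned, so a
 * saved frame pointer that is not 16-byte aligned cannot belong to a valid
 * frame and the user stack walk should stop there. */
static bool fp_plausible(uintptr_t fp)
{
        return (fp & 0xfUL) == 0;
}

int main(void)
{
        printf("%d %d\n", fp_plausible(0x7ffff0a0), fp_plausible(0x7ffff0a8));
        return 0;
}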
* x86/PCI: Check pcie_find_root_port() return for NULLSamasth Norway Ananda2024-10-041-2/+2
| | | | | | | | | | | | | | | | | | [ Upstream commit dbc3171194403d0d40e4bdeae666f6e76e428b53 ] If pcie_find_root_port() is unable to locate a Root Port, it will return NULL. Check the pointer for NULL before dereferencing it. This particular case is in a quirk for devices that are always below a Root Port, so this won't avoid a problem and doesn't need to be backported, but check as a matter of style and to prevent copy/paste mistakes. Link: https://lore.kernel.org/r/20240812202659.1649121-1-samasth.norway.ananda@oracle.com Signed-off-by: Samasth Norway Ananda <samasth.norway.ananda@oracle.com> [bhelgaas: drop Fixes: and explain why there's no problem in this case] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
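The defensive pattern being described, modelled in a self-contained sketch; find_root_port_sketch() mimics how the real pcie_find_root_port() helper can return NULL when no Root Port exists above a device:

#include <stdio.h>

struct pci_dev_sketch {
        const char *name;
        struct pci_dev_sketch *parent;  /* NULL at the top of the hierarchy */
        int is_root_port;
};

static struct pci_dev_sketch *find_root_port_sketch(struct pci_dev_sketch *dev)
{
        for (; dev; dev = dev->parent)
                if (dev->is_root_port)
                        return dev;
        return NULL;    /* e.g. a root-complex integrated endpoint */
}

int main(void)
{
        struct pci_dev_sketch ep = { "endpoint", NULL, 0 };  /* nothing above it */
        struct pci_dev_sketch *rp = find_root_port_sketch(&ep);

        if (!rp) {                      /* check before dereferencing */
                puts("no root port, skip quirk");
                return 0;
        }
        printf("root port: %s\n", rp->name);
        return 0;
}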
* bpf, arm64: Fix tailcall hierarchyLeon Hwang2024-10-041-16/+41
| | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 66ff4d61dc124eafe9efaeaef696a09b7f236da2 ] This patch fixes a tailcall issue caused by abusing the tailcall in bpf2bpf feature on arm64 like the way of "bpf, x64: Fix tailcall hierarchy". On arm64, when a tail call happens, it uses tail_call_cnt_ptr to increment tail_call_cnt, too. At the prologue of main prog, it has to initialize tail_call_cnt and prepare tail_call_cnt_ptr. At the prologue of subprog, it pushes x26 register twice, and does not initialize tail_call_cnt. At the epilogue, it pops x26 twice, no matter whether it is main prog or subprog. Fixes: d4609a5d8c70 ("bpf, arm64: Keep tail call count across bpf2bpf calls") Acked-by: Puranjay Mohan <puranjay@kernel.org> Signed-off-by: Leon Hwang <hffilwlqm@gmail.com> Link: https://lore.kernel.org/r/20240714123902.32305-3-hffilwlqm@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
* bpf, x64: Fix tailcall hierarchyLeon Hwang2024-10-041-28/+79
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 116e04ba1459fc08f80cf27b8c9f9f188be0fcb2 ] This patch fixes a tailcall issue caused by abusing the tailcall in bpf2bpf feature. As we know, tail_call_cnt propagates by rax from caller to callee when to call subprog in tailcall context. But, like the following example, MAX_TAIL_CALL_CNT won't work because of missing tail_call_cnt back-propagation from callee to caller. \#include <linux/bpf.h> \#include <bpf/bpf_helpers.h> \#include "bpf_legacy.h" struct { __uint(type, BPF_MAP_TYPE_PROG_ARRAY); __uint(max_entries, 1); __uint(key_size, sizeof(__u32)); __uint(value_size, sizeof(__u32)); } jmp_table SEC(".maps"); int count = 0; static __noinline int subprog_tail1(struct __sk_buff *skb) { bpf_tail_call_static(skb, &jmp_table, 0); return 0; } static __noinline int subprog_tail2(struct __sk_buff *skb) { bpf_tail_call_static(skb, &jmp_table, 0); return 0; } SEC("tc") int entry(struct __sk_buff *skb) { volatile int ret = 1; count++; subprog_tail1(skb); subprog_tail2(skb); return ret; } char __license[] SEC("license") = "GPL"; At run time, the tail_call_cnt in entry() will be propagated to subprog_tail1() and subprog_tail2(). But, when the tail_call_cnt in subprog_tail1() updates when bpf_tail_call_static(), the tail_call_cnt in entry() won't be updated at the same time. As a result, in entry(), when tail_call_cnt in entry() is less than MAX_TAIL_CALL_CNT and subprog_tail1() returns because of MAX_TAIL_CALL_CNT limit, bpf_tail_call_static() in suprog_tail2() is able to run because the tail_call_cnt in subprog_tail2() propagated from entry() is less than MAX_TAIL_CALL_CNT. So, how many tailcalls are there for this case if no error happens? From top-down view, does it look like hierarchy layer and layer? With this view, there will be 2+4+8+...+2^33 = 2^34 - 2 = 17,179,869,182 tailcalls for this case. How about there are N subprog_tail() in entry()? There will be almost N^34 tailcalls. Then, in this patch, it resolves this case on x86_64. In stead of propagating tail_call_cnt from caller to callee, it propagates its pointer, tail_call_cnt_ptr, tcc_ptr for short. However, where does it store tail_call_cnt? It stores tail_call_cnt on the stack of main prog. When tail call happens in subprog, it increments tail_call_cnt by tcc_ptr. Meanwhile, it stores tail_call_cnt_ptr on the stack of main prog, too. And, before jump to tail callee, it has to pop tail_call_cnt and tail_call_cnt_ptr. Then, at the prologue of subprog, it must not make rax as tail_call_cnt_ptr again. It has to reuse tail_call_cnt_ptr from caller. As a result, at run time, it has to recognize rax is tail_call_cnt or tail_call_cnt_ptr at prologue by: 1. rax is tail_call_cnt if rax is <= MAX_TAIL_CALL_CNT. 2. rax is tail_call_cnt_ptr if rax is > MAX_TAIL_CALL_CNT, because a pointer won't be <= MAX_TAIL_CALL_CNT. Here's an example to dump JITed. 
struct { __uint(type, BPF_MAP_TYPE_PROG_ARRAY); __uint(max_entries, 1); __uint(key_size, sizeof(__u32)); __uint(value_size, sizeof(__u32)); } jmp_table SEC(".maps"); int count = 0; static __noinline int subprog_tail(struct __sk_buff *skb) { bpf_tail_call_static(skb, &jmp_table, 0); return 0; } SEC("tc") int entry(struct __sk_buff *skb) { int ret = 1; count++; subprog_tail(skb); subprog_tail(skb); return ret; } When bpftool p d j id 42: int entry(struct __sk_buff * skb): bpf_prog_0c0f4c2413ef19b1_entry: ; int entry(struct __sk_buff *skb) 0: endbr64 4: nopl (%rax,%rax) 9: xorq %rax, %rax ;; rax = 0 (tail_call_cnt) c: pushq %rbp d: movq %rsp, %rbp 10: endbr64 14: cmpq $33, %rax ;; if rax > 33, rax = tcc_ptr 18: ja 0x20 ;; if rax > 33 goto 0x20 ---+ 1a: pushq %rax ;; [rbp - 8] = rax = 0 | 1b: movq %rsp, %rax ;; rax = rbp - 8 | 1e: jmp 0x21 ;; ---------+ | 20: pushq %rax ;; <--------|---------------+ 21: pushq %rax ;; <--------+ [rbp - 16] = rax 22: pushq %rbx ;; callee saved 23: movq %rdi, %rbx ;; rbx = skb (callee saved) ; count++; 26: movabsq $-82417199407104, %rdi 30: movl (%rdi), %esi 33: addl $1, %esi 36: movl %esi, (%rdi) ; subprog_tail(skb); 39: movq %rbx, %rdi ;; rdi = skb 3c: movq -16(%rbp), %rax ;; rax = tcc_ptr 43: callq 0x80 ;; call subprog_tail() ; subprog_tail(skb); 48: movq %rbx, %rdi ;; rdi = skb 4b: movq -16(%rbp), %rax ;; rax = tcc_ptr 52: callq 0x80 ;; call subprog_tail() ; return ret; 57: movl $1, %eax 5c: popq %rbx 5d: leave 5e: retq int subprog_tail(struct __sk_buff * skb): bpf_prog_3a140cef239a4b4f_subprog_tail: ; int subprog_tail(struct __sk_buff *skb) 0: endbr64 4: nopl (%rax,%rax) 9: nopl (%rax) ;; do not touch tail_call_cnt c: pushq %rbp d: movq %rsp, %rbp 10: endbr64 14: pushq %rax ;; [rbp - 8] = rax (tcc_ptr) 15: pushq %rax ;; [rbp - 16] = rax (tcc_ptr) 16: pushq %rbx ;; callee saved 17: pushq %r13 ;; callee saved 19: movq %rdi, %rbx ;; rbx = skb ; asm volatile("r1 = %[ctx]\n\t" 1c: movabsq $-105487587488768, %r13 ;; r13 = jmp_table 26: movq %rbx, %rdi ;; 1st arg, skb 29: movq %r13, %rsi ;; 2nd arg, jmp_table 2c: xorl %edx, %edx ;; 3rd arg, index = 0 2e: movq -16(%rbp), %rax ;; rax = [rbp - 16] (tcc_ptr) 35: cmpq $33, (%rax) 39: jae 0x4e ;; if *tcc_ptr >= 33 goto 0x4e --------+ 3b: jmp 0x4e ;; jmp bypass, toggled by poking | 40: addq $1, (%rax) ;; (*tcc_ptr)++ | 44: popq %r13 ;; callee saved | 46: popq %rbx ;; callee saved | 47: popq %rax ;; undo rbp-16 push | 48: popq %rax ;; undo rbp-8 push | 49: nopl (%rax,%rax) ;; tail call target, toggled by poking | ; return 0; ;; | 4e: popq %r13 ;; restore callee saved <--------------+ 50: popq %rbx ;; restore callee saved 51: leave 52: retq Furthermore, when trampoline is the caller of bpf prog, which is tail_call_reachable, it is required to propagate rax through trampoline. Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT") Fixes: e411901c0b77 ("bpf: allow for tailcalls in BPF subprograms for x64 JIT") Reviewed-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Leon Hwang <hffilwlqm@gmail.com> Link: https://lore.kernel.org/r/20240714123902.32305-2-hffilwlqm@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
* xen: tolerate ACPI NVS memory overlapping with Xen allocated memoryJuergen Gross2024-10-041-1/+91
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit be35d91c8880650404f3bf813573222dfb106935 ] In order to minimize required special handling for running as Xen PV dom0, the memory layout is modified to match that of the host. This requires to have only RAM at the locations where Xen allocated memory is living. Unfortunately there seem to be some machines, where ACPI NVS is located at 64 MB, resulting in a conflict with the loaded kernel or the initial page tables built by Xen. Avoid this conflict by swapping the ACPI NVS area in the memory map with unused RAM. This is possible via modification of the dom0 P2M map. Accesses to the ACPI NVS area are done either for saving and restoring it across suspend operations (this will work the same way as before), or by ACPI code when NVS memory is referenced from other ACPI tables. The latter case is handled by a Xen specific indirection of acpi_os_ioremap(). While the E820 map can (and should) be modified right away, the P2M map can be updated only after memory allocation is working, as the P2M map might need to be extended. Fixes: 808fdb71936c ("xen: check for kernel memory conflicting with memory layout") Signed-off-by: Juergen Gross <jgross@suse.com> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* xen: add capability to remap non-RAM pages to different PFNsJuergen Gross2024-10-042-0/+66
| | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit d05208cf7f05420ad10cc7f9550f91d485523659 ] When running as a Xen PV dom0 it can happen that the kernel is being loaded to a guest physical address conflicting with the host memory map. In order to be able to resolve this conflict, add the capability to remap non-RAM areas to different guest PFNs. A function to use this remapping information for other purposes than doing the remap will be added when needed. As the number of conflicts should be rather low (currently only machines with max. 1 conflict are known), save the remap data in a small statically allocated array. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Stable-dep-of: be35d91c8880 ("xen: tolerate ACPI NVS memory overlapping with Xen allocated memory") Signed-off-by: Sasha Levin <sashal@kernel.org>
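A rough sketch of the "small statically allocated array" approach described above; the structure and function names are invented for the example and do not match the actual Xen code:

#include <stdint.h>
#include <stdio.h>

#define MAX_REMAP_ENTRIES 8             /* conflicts are expected to be rare */

struct pfn_remap {
        uint64_t from_pfn;              /* original PFN dictated by the host layout */
        uint64_t to_pfn;                /* replacement guest PFN */
        uint64_t count;                 /* number of consecutive pages */
};

static struct pfn_remap remap_table[MAX_REMAP_ENTRIES];
static unsigned int remap_entries;

static int remap_add(uint64_t from, uint64_t to, uint64_t count)
{
        if (remap_entries >= MAX_REMAP_ENTRIES)
                return -1;              /* table full: conflict cannot be resolved */
        remap_table[remap_entries++] = (struct pfn_remap){ from, to, count };
        return 0;
}

static uint64_t remap_lookup(uint64_t pfn)
{
        for (unsigned int i = 0; i < remap_entries; i++) {
                const struct pfn_remap *r = &remap_table[i];

                if (pfn >= r->from_pfn && pfn < r->from_pfn + r->count)
                        return r->to_pfn + (pfn - r->from_pfn);
        }
        return pfn;                     /* not a remapped page */
}

int main(void)
{
        remap_add(0x4000, 0x80000, 16); /* move a 16-page non-RAM region */
        printf("%#llx -> %#llx\n", 0x4005ULL,
               (unsigned long long)remap_lookup(0x4005));
        return 0;
}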
* xen: move max_pfn in xen_memory_setup() out of function scopeJuergen Gross2024-10-041-26/+26
| | | | | | | | | | | | | | | | | | [ Upstream commit 43dc2a0f479b9cd30f6674986d7a40517e999d31 ] Instead of having max_pfn as a local variable of xen_memory_setup(), make it a static variable in setup.c instead. This avoids having to pass it to subfunctions, which will be needed in more cases in future. Rename it to ini_nr_pages, as the value denotes the currently usable number of memory pages as passed from the hypervisor at boot time. Signed-off-by: Juergen Gross <jgross@suse.com> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Stable-dep-of: be35d91c8880 ("xen: tolerate ACPI NVS memory overlapping with Xen allocated memory") Signed-off-by: Sasha Levin <sashal@kernel.org>
* xen: introduce generic helper checking for memory map conflictsJuergen Gross2024-10-043-11/+31
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit ba88829706e2c5b7238638fc2b0713edf596495e ] When booting as a Xen PV dom0 the memory layout of the dom0 is modified to match that of the host, as this requires less changes in the kernel for supporting Xen. There are some cases, though, which are problematic, as it is the Xen hypervisor selecting the kernel's load address plus some other data, which might conflict with the host's memory map. These conflicts are detected at boot time and result in a boot error. In order to support handling at least some of these conflicts in future, introduce a generic helper function which will later gain the ability to adapt the memory layout when possible. Add the missing check for the xen_start_info area. Note that possible p2m map and initrd memory conflicts are handled already by copying the data to memory areas not conflicting with the memory map. The initial stack allocated by Xen doesn't need to be checked, as early boot code is switching to the statically allocated initial kernel stack. Initial page tables and the kernel itself will be handled later. Signed-off-by: Juergen Gross <jgross@suse.com> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Stable-dep-of: be35d91c8880 ("xen: tolerate ACPI NVS memory overlapping with Xen allocated memory") Signed-off-by: Sasha Levin <sashal@kernel.org>
* minmax: avoid overly complex min()/max() macro arguments in xenLinus Torvalds2024-10-041-2/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit e8432ac802a028eaee6b1e86383d7cd8e9fb8431 ] We have some very fancy min/max macros that have tons of sanity checking to warn about mixed signedness etc. This is all things that a sane compiler should warn about, but there are no sane compiler interfaces for this, and '-Wsign-compare' is broken [1] and not useful. So then we compensate (some would say over-compensate) by doing the checks manually with some truly horrid macro games. And no, we can't just use __builtin_types_compatible_p(), because the whole question of "does it make sense to compare these two values" is a lot more complicated than that. For example, it makes a ton of sense to compare unsigned values with simple constants like "5", even if that is indeed a signed type. So we have these very strange macros to try to make sensible type checking decisions on the arguments to 'min()' and 'max()'. But that can cause enormous code expansion if the min()/max() macros are used with complicated expressions, and particularly if you nest these things so that you get the first big expansion then expanded again. The xen setup.c file ended up ballooning to over 50MB of preprocessed noise that takes 15s to compile (obviously depending on the build host), largely due to one single line. So let's split that one single line to just be simpler. I think it ends up being more legible to humans too at the same time. Now that single file compiles in under a second. Reported-and-reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Link: https://lore.kernel.org/all/c83c17bb-be75-4c67-979d-54eee38774c6@lucifer.local/ Link: https://staticthinking.wordpress.com/2023/07/25/wsign-compare-is-garbage/ [1] Cc: David Laight <David.Laight@aculab.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Stable-dep-of: be35d91c8880 ("xen: tolerate ACPI NVS memory overlapping with Xen allocated memory") Signed-off-by: Sasha Levin <sashal@kernel.org>
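A toy user-space illustration of why splitting the expression helps; the naive MIN()/MAX() macros here are stand-ins, and the kernel's min()/max() expand far more text per use because of their type-checking machinery:

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
        unsigned long base = 0x100000, chunk = 0x40000;
        unsigned long limit = 0x200000, start = 0x150000;

        /* Nested form: the whole MAX(...) argument is pasted into MIN()
         * twice, and each extra nesting level multiplies the amount of
         * preprocessed text. */
        unsigned long nested = MIN(limit, MAX(base + chunk, start));

        /* Split form: same result, but every macro now sees a trivial
         * argument, so the expansion stays small and the line is easier
         * to read. */
        unsigned long hi = MAX(base + chunk, start);
        unsigned long split = MIN(limit, hi);

        printf("%#lx %#lx\n", nested, split);
        return 0;
}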
* xen: use correct end address of kernel for conflict checkingJuergen Gross2024-10-041-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit fac1bceeeb04886fc2ee952672e6e6c85ce41dca ] When running as a Xen PV dom0 the kernel is loaded by the hypervisor using a different memory map than that of the host. In order to minimize the required changes in the kernel, the kernel adapts its memory map to that of the host. In order to do that it is checking for conflicts of its load address with the host memory map. Unfortunately the tested memory range does not include the .brk area, which might result in crashes or memory corruption when this area does conflict with the memory map of the host. Fix the test by using the _end label instead of __bss_stop. Fixes: 808fdb71936c ("xen: check for kernel memory conflicting with memory layout") Signed-off-by: Juergen Gross <jgross@suse.com> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc/vdso: Unconditionally use CFUNC macroChristophe Leroy2024-10-041-4/+0
| | | | | | | | | | | | | | [ Upstream commit 65948b0e716a47382731889ee6bbb18642b8b003 ] During merge of commit 4e991e3c16a3 ("powerpc: add CFUNC assembly label annotation") a fallback version of the CFUNC macro was added at the last minute, so it can be used unconditionally. Fixes: 4e991e3c16a3 ("powerpc: add CFUNC assembly label annotation") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/0fa863f2f69b2ca4094ae066fcf1430fb31110c9.1724313540.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc/8xx: Fix kernel vs user address comparisonChristophe Leroy2024-10-041-3/+3
| | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 65a82e117ffeeab0baf6f871a1cab11a28ace183 ] Since commit 9132a2e82adc ("powerpc/8xx: Define a MODULE area below kernel text"), module exec space is below PAGE_OFFSET so not only space above PAGE_OFFSET, but space above TASK_SIZE need to be seen as kernel space. Until now the problem went undetected because by default TASK_SIZE is 0x8000000 which means address space is determined by just checking upper address bit. But when TASK_SIZE is over 0x80000000, PAGE_OFFSET is used for comparison, leading to thinking module addresses are part of user space. Fix it by using TASK_SIZE instead of PAGE_OFFSET for address comparison. Fixes: 9132a2e82adc ("powerpc/8xx: Define a MODULE area below kernel text") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/3f574c9845ff0a023b46cb4f38d2c45aecd769bd.1724173828.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>
* powerpc/8xx: Fix initial memory mappingChristophe Leroy2024-10-041-2/+2
| | | | | | | | | | | | | | | | | | | [ Upstream commit f9f2bff64c2f0dbee57be3d8c2741357ad3d05e6 ] Commit cf209951fa7f ("powerpc/8xx: Map linear memory with huge pages") introduced an initial mapping of kernel TEXT using PAGE_KERNEL_TEXT, but the pages that contain kernel TEXT may also contain kernel RODATA, and depending on selected debug options PAGE_KERNEL_TEXT may be either RWX or ROX. RODATA must be writable during init because it also contains ro_after_init data. So use PAGE_KERNEL_X instead to be sure it is RWX. Fixes: cf209951fa7f ("powerpc/8xx: Map linear memory with huge pages") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/dac7a828d8497c4548c91840575a706657baa4f1.1724173828.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>
* m68k: Fix kernel_clone_args.flags in m68k_clone()Finn Thain2024-10-041-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 09b3d870faa7bc3e96c0978ab3cf4e96e4b15571 ] Stan Johnson recently reported a failure from the 'dump' command: DUMP: Date of this level 0 dump: Fri Aug 9 23:37:15 2024 DUMP: Dumping /dev/sda (an unlisted file system) to /dev/null DUMP: Label: none DUMP: Writing 10 Kilobyte records DUMP: mapping (Pass I) [regular files] DUMP: mapping (Pass II) [directories] DUMP: estimated 3595695 blocks. DUMP: Context save fork fails in parent 671 The dump program uses the clone syscall with the CLONE_IO flag, that is, flags == 0x80000000. When that value is promoted from long int to u64 by m68k_clone(), it undergoes sign-extension. The new value includes CLONE_INTO_CGROUP so the validation in cgroup_css_set_fork() fails and the syscall returns -EBADF. Avoid sign-extension by casting to u32. Reported-by: Stan Johnson <userm57@yahoo.com> Closes: https://lists.debian.org/debian-68k/2024/08/msg00000.html Fixes: 6aabc1facdb2 ("m68k: Implement copy_thread_tls()") Signed-off-by: Finn Thain <fthain@linux-m68k.org> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Link: https://lore.kernel.org/3463f1e5d4e95468dc9f3368f2b78ffa7b72199b.1723335149.git.fthain@linux-m68k.org Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
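A self-contained demonstration of the sign-extension described above; int32_t stands in for the 32-bit 'long' syscall argument on m68k so the demo behaves the same on a 64-bit build host:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        int32_t raw_flags = (int32_t)0x80000000;        /* CLONE_IO from userspace */

        /* Buggy path: promoting the signed 32-bit value to u64 sign-extends
         * it, setting bit 33 (CLONE_INTO_CGROUP) among others, which is why
         * cgroup_css_set_fork() rejected the clone with -EBADF. */
        uint64_t sign_extended = (uint64_t)raw_flags;

        /* Fixed path: cast through u32 first so the value is zero-extended. */
        uint64_t zero_extended = (uint64_t)(uint32_t)raw_flags;

        printf("sign-extended: %#llx\n", (unsigned long long)sign_extended);
        printf("zero-extended: %#llx\n", (unsigned long long)zero_extended);
        return 0;
}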
* x86/boot/64: Strip percpu address space when setting up GDT descriptorsUros Bizjak2024-10-041-1/+2
| | | | | | | | | | | | | | | | | | | | | [ Upstream commit b51207dc02ec3aeaa849e419f79055d7334845b6 ] init_per_cpu_var() returns a pointer in the percpu address space while rip_rel_ptr() expects a pointer in the generic address space. When strict address space checks are enabled, GCC's named address space checks fail: asm.h:124:63: error: passing argument 1 of 'rip_rel_ptr' from pointer to non-enclosed address space Add a explicit cast to remove address space of the returned pointer. Fixes: 11e36b0f7c21 ("x86/boot/64: Load the final kernel GDT during early boot directly, remove startup_gdt[]") Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20240819083334.148536-1-ubizjak@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
* x86/mm: Use IPIs to synchronize LAM enablementYosry Ahmed2024-10-042-7/+29
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 3b299b99556c1753923f8d9bbd9304bcd139282f ] LAM can only be enabled when a process is single-threaded. But _kernel_ threads can temporarily use a single-threaded process's mm. If LAM is enabled by a userspace process while a kthread is using its mm, the kthread will not observe LAM enablement (i.e. LAM will be disabled in CR3). This could be fine for the kthread itself, as LAM only affects userspace addresses. However, if the kthread context switches to a thread in the same userspace process, CR3 may or may not be updated because the mm_struct doesn't change (based on pending TLB flushes). If CR3 is not updated, the userspace thread will run incorrectly with LAM disabled, which may cause page faults when using tagged addresses. Example scenario: CPU 1 CPU 2 /* kthread */ kthread_use_mm() /* user thread */ prctl_enable_tagged_addr() /* LAM enabled on CPU 2 */ /* LAM disabled on CPU 1 */ context_switch() /* to CPU 1 */ /* Switching to user thread */ switch_mm_irqs_off() /* CR3 not updated */ /* LAM is still disabled on CPU 1 */ Synchronize LAM enablement by sending an IPI to all CPUs running with the mm_struct to enable LAM. This makes sure LAM is enabled on CPU 1 in the above scenario before prctl_enable_tagged_addr() returns and userspace starts using tagged addresses, and before it's possible to run the userspace process on CPU 1. In switch_mm_irqs_off(), move reading the LAM mask until after mm_cpumask() is updated. This ensures that if an outdated LAM mask is written to CR3, an IPI is received to update it right after IRQs are re-enabled. [ dhansen: Add a LAM enabling helper and comment it ] Fixes: 82721d8b25d7 ("x86/mm: Handle LAM on context switch") Suggested-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Link: https://lore.kernel.org/all/20240702132139.3332013-2-yosryahmed%40google.com Signed-off-by: Sasha Levin <sashal@kernel.org>
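A condensed, hedged sketch of the synchronization shape described above, not the exact mainline change; enable_lam_func() is an illustrative name, and the real patch also moves the LAM-mask read in switch_mm_irqs_off() after the mm_cpumask() update:

#include <linux/mm_types.h>
#include <linux/smp.h>
#include <asm/tlbflush.h>

/* Runs on every CPU in mm_cpumask(mm): if that CPU is currently running
 * with this mm, rewrite CR3 so the new LAM bits take effect immediately. */
static void enable_lam_func(void *info)
{
        struct mm_struct *mm = info;

        if (this_cpu_read(cpu_tlbstate.loaded_mm) != mm)
                return;
        /* ... reload CR3 from this mm with its updated LAM mask ... */
}

/* Called from the prctl path after the new LAM mask is published in the mm;
 * waits until every CPU running this mm has acted on it. */
static void sync_lam_on_all_cpus(struct mm_struct *mm)
{
        on_each_cpu_mask(mm_cpumask(mm), enable_lam_func, mm, true);
}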
* arm64: dts: mediatek: mt8195: Correct clock order for dp_intf*Chen-Yu Tsai2024-10-041-6/+6
| | | | | | | | | | | | | | [ Upstream commit 51bc68debab9e30b50c6352315950f3cfc309b32 ] The clocks for dp_intf* device nodes are given in the wrong order, causing the binding validation to fail. Fixes: 6c2503b5856a ("arm64: dts: mt8195: Add dp-intf nodes") Signed-off-by: Chen-Yu Tsai <wenst@chromium.org> Reviewed-by: Nícolas F. R. A. Prado <nfraprado@collabora.com> Link: https://lore.kernel.org/r/20240802070951.1086616-1-wenst@chromium.org Signed-off-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* ARM: versatile: fix OF node leak in CPUs prepareKrzysztof Kozlowski2024-10-041-0/+1
| | | | | | | | | | | | | | [ Upstream commit f2642d97f2105ed17b2ece0c597450f2ff95d704 ] Machine code is leaking OF node reference from of_find_matching_node() in realview_smp_prepare_cpus(). Fixes: 5420b4b15617 ("ARM: realview: add an DT SMP boot method") Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Acked-by: Liviu Dudau <liviu.dudau@arm.com> Link: https://lore.kernel.org/20240826054934.10724-1-krzysztof.kozlowski@linaro.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: ti: k3-am654-idk: Fix dtbs_check warning in ICSSG dmasMD Danish Anwar2024-10-041-6/+2
| | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 2bea7920da8001172f54359395700616269ccb70 ] ICSSG doesn't use mgmnt rsp dmas. But these are added in the dmas for icssg1-eth and icssg0-eth node. These mgmnt rsp dmas result in below dtbs_check warnings. /workdir/arch/arm64/boot/dts/ti/k3-am654-idk.dtb: icssg1-eth: dmas: [[39, 49664], [39, 49665], [39, 49666], [39, 49667], [39, 49668], [39, 49669], [39, 49670], [39, 49671], [39, 16896], [39, 16897], [39, 16898], [39, 16899]] is too long from schema $id: http://devicetree.org/schemas/net/ti,icssg-prueth.yaml# /workdir/arch/arm64/boot/dts/ti/k3-am654-idk.dtb: icssg0-eth: dmas: [[39, 49408], [39, 49409], [39, 49410], [39, 49411], [39, 49412], [39, 49413], [39, 49414], [39, 49415], [39, 16640], [39, 16641], [39, 16642], [39, 16643]] is too long from schema $id: http://devicetree.org/schemas/net/ti,icssg-prueth.yaml# Fix these warnings by removing mgmnt rsp dmas from icssg1-eth and icssg0-eth nodes. Fixes: a4d5bc3214eb ("arm64: dts: ti: k3-am654-idk: Add ICSSG Ethernet ports") Signed-off-by: MD Danish Anwar <danishanwar@ti.com> Reviewed-by: Roger Quadros <rogerq@kernel.org> Link: https://lore.kernel.org/r/20240830111000.232028-1-danishanwar@ti.com Signed-off-by: Nishanth Menon <nm@ti.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* ARM: dts: imx7d-zii-rmu2: fix Ethernet PHY pinctrl propertyKrzysztof Kozlowski2024-10-041-1/+1
| | | | | | | | | | | | | | [ Upstream commit 0e49cfe364dea4345551516eb2fe53135a10432b ] There is no "fsl,phy" property in pin controller pincfg nodes: imx7d-zii-rmu2.dtb: pinctrl@302c0000: enet1phyinterruptgrp: 'fsl,pins' is a required property imx7d-zii-rmu2.dtb: pinctrl@302c0000: enet1phyinterruptgrp: 'fsl,phy' does not match any of the regexes: 'pinctrl-[0-9]+' Fixes: f496e6750083 ("ARM: dts: Add ZII support for ZII i.MX7 RMU2 board") Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
* ARM: dts: microchip: sama7g5: Fix RTT clockClaudiu Beznea2024-10-041-1/+1
| | | | | | | | | | | | | [ Upstream commit 867bf1923200e6ad82bad0289f43bf20b4ac7ff9 ] According to datasheet, Chapter 34. Clock Generator, section 34.2, Embedded characteristics, source clock for RTT is the TD_SLCK, registered with ID 1 by the slow clock controller driver. Fix RTT clock. Fixes: 7540629e2fc7 ("ARM: dts: at91: add sama7g5 SoC DT and sama7g5-ek") Link: https://lore.kernel.org/r/20240826165320.3068359-1-claudiu.beznea@tuxon.dev Signed-off-by: Claudiu Beznea <claudiu.beznea@tuxon.dev> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: qcom: x1e80100: Fix PHY for DP2Abel Vesa2024-10-041-5/+5
| | | | | | | | | | | | | | | [ Upstream commit ba728bda663b0e812cb20450d18af5d0edd803a2 ] The actual PHY used by MDSS DP2 is the USB SS2 QMP one. So switch to it instead. This is needed to get external DP support on boards like CRD where the 3rd Type-C USB port (right-hand side) is connected to DP2. Fixes: 1940c25eaa63 ("arm64: dts: qcom: x1e80100: Add display nodes") Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Reviewed-by: Konrad Dybcio <konradybcio@kernel.org> Link: https://lore.kernel.org/r/20240829-x1e80100-dts-dp2-use-qmpphy-ss2-v1-1-9ba3dca61ccc@linaro.org Signed-off-by: Bjorn Andersson <andersson@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: ti: k3-j721e-beagleboneai64: Fix reversed C6x carveout locationsAndrew Davis2024-10-041-2/+2
| | | | | | | | | | | | | | | | [ Upstream commit 1a314099b7559690fe23cdf3300dfff6e830ecb1 ] The DMA carveout for the C6x core 0 is at 0xa6000000 and core 1 is at 0xa7000000. These are reversed in DT. While both C6x can access either region, so this is not normally a problem, but if we start restricting the memory each core can access (such as with firewalls) the cores accessing the regions for the wrong core will not work. Fix this here. Fixes: fae14a1cb8dd ("arm64: dts: ti: Add k3-j721e-beagleboneai64") Signed-off-by: Andrew Davis <afd@ti.com> Link: https://lore.kernel.org/r/20240801181232.55027-2-afd@ti.com Signed-off-by: Nishanth Menon <nm@ti.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: ti: k3-j721e-sk: Fix reversed C6x carveout locationsAndrew Davis2024-10-041-2/+2
| | | | | | | | | | | | | | | | [ Upstream commit 9f3814a7c06b7c7296cf8c1622078ad71820454b ] The DMA carveout for the C6x core 0 is at 0xa6000000 and core 1 is at 0xa7000000. These are reversed in DT. While both C6x can access either region, so this is not normally a problem, but if we start restricting the memory each core can access (such as with firewalls) the cores accessing the regions for the wrong core will not work. Fix this here. Fixes: f46d16cf5b43 ("arm64: dts: ti: k3-j721e-sk: Add DDR carveout memory nodes") Signed-off-by: Andrew Davis <afd@ti.com> Link: https://lore.kernel.org/r/20240801181232.55027-1-afd@ti.com Signed-off-by: Nishanth Menon <nm@ti.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: rockchip: Correct vendor prefix for Hardkernel ODROID-M1Jonas Karlman2024-10-041-1/+1
| | | | | | | | | | | | | | | [ Upstream commit 735065e774dcfc62e38df01a535862138b6c92ed ] The vendor prefix for Hardkernel ODROID-M1 is incorrectly listed as rockchip. Use the proper hardkernel vendor prefix for this board, while at it also drop the redundant soc prefix. Fixes: fd3583267703 ("arm64: dts: rockchip: Add Hardkernel ODROID-M1 board") Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Jonas Karlman <jonas@kwiboo.se> Link: https://lore.kernel.org/r/20240827211825.1419820-3-jonas@kwiboo.se Signed-off-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: tegra: Correct location of power-sensors for IGX OrinJon Hunter2024-10-042-33/+33
| | | | | | | | | | | | [ Upstream commit b93679b8f165467e1584f9b23055db83f45c32ce ] The power-sensors are located on the carrier board and not the module board and so update the IGX Orin device-tree files to fix this. Fixes: 9152ed09309d ("arm64: tegra: Add power-sensors for Tegra234 boards") Signed-off-by: Jon Hunter <jonathanh@nvidia.com> Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
* ARM: dts: microchip: sam9x60: Fix rtc/rtt clocksAlexander Dahl2024-10-041-2/+2
| | | | | | | | | | | | | | | | | | | | | [ Upstream commit d355c895fa4ddd8bec15569eee540baeed7df8c5 ] The RTC and RTT peripherals use the timing domain slow clock (TD_SLCK), sourced from the 32.768 kHz crystal oscillator or slow rc oscillator. The previously used Monitoring domain slow clock (MD_SLCK) is sourced from an internal RC oscillator which is most probably not precise enough for real time clock purposes. Fixes: 1e5f532c2737 ("ARM: dts: at91: sam9x60: add device tree for soc and board") Fixes: 5f6b33f46346 ("ARM: dts: sam9x60: add rtt") Signed-off-by: Alexander Dahl <ada@thorsis.com> Link: https://lore.kernel.org/r/20240821055136.6858-1-ada@thorsis.com [claudiu.beznea: removed () around the last commit description paragraph, removed " in front of "timing domain slow clock", described that TD_SLCK can also be sourced from slow rc oscillator] Signed-off-by: Claudiu Beznea <claudiu.beznea@tuxon.dev> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: renesas: r9a07g044: Correct GICD and GICR sizesLad Prabhakar2024-10-041-2/+2
| | | | | | | | | | | | | | [ Upstream commit 833948fb2b63155847ab691a54800f801555429b ] The RZ/G2L(C) SoC is equipped with the GIC-600. The GICD is 64KiB + 64KiB for the MBI alias (in total 128KiB), and the GICR is 128KiB per CPU. Fixes: 68a45525297b2 ("arm64: dts: renesas: Add initial DTSI for RZ/G2{L,LC} SoC's") Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/20240730122436.350013-5-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: renesas: r9a07g054: Correct GICD and GICR sizesLad Prabhakar2024-10-041-2/+2
| | | | | | | | | | | | | [ Upstream commit 45afa9eacb59b258d2e53c7f63430ea1e8344803 ] The RZ/V2L SoC is equipped with the GIC-600. The GICD is 64KiB + 64KiB for the MBI alias (in total 128KiB), and the GICR is 128KiB per CPU. Fixes: 7c2b8198f4f32 ("arm64: dts: renesas: Add initial DTSI for RZ/V2L SoC") Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/20240730122436.350013-4-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: renesas: r9a07g043u: Correct GICD and GICR sizesLad Prabhakar2024-10-041-2/+2
| | | | | | | | | | | | | | | [ Upstream commit ab39547f739236e7f16b8b0a51fdca95cc9cadd3 ] The RZ/G2UL SoC is equipped with the GIC-600. The GICD is 64KiB + 64KiB for the MBI alias (in total 128KiB), and the GICR is 128KiB per CPU. Despite the RZ/G2UL SoC being single-core, it has two instances of GICR. Fixes: cf40c9689e510 ("arm64: dts: renesas: Add initial DTSI for RZ/G2UL SoC") Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/20240730122436.350013-3-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: renesas: r9a08g045: Correct GICD and GICR sizesLad Prabhakar2024-10-041-2/+2
| | | | | | | | | | | | | | | [ Upstream commit ec9532628eb9d82282b8e52fd9c4a3800d87feec ] The RZ/G3S SoC is equipped with the GIC-600. The GICD is 64KiB + 64KiB for the MBI alias (in total 128KiB), and the GICR is 128KiB per CPU. Despite the RZ/G3S SoC being single-core, it has two instances of GICR. Fixes: e20396d65b959 ("arm64: dts: renesas: Add initial DTSI for RZ/G3S SoC") Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/20240730122436.350013-2-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Sasha Levin <sashal@kernel.org>
* arm64: dts: mediatek: mt8186: Fix supported-hw mask for GPU OPPsAngeloGioacchino Del Regno2024-10-041-6/+6
| | | | | | | | | | | | | | | | | | | [ Upstream commit 2317d018b835842df0501d8f9e9efa843068a101 ] The speedbin eFuse reads a value 'x' from 0 to 7 and, in order to make that compatible with opp-supported-hw, it gets post processed as BIT(x). Change all of the 0x30 supported-hw to 0x20 to avoid getting duplicate OPPs for speedbin 4, and also change all of the 0x8 to 0xcf because speedbins different from 4 and 5 do support 900MHz, 950MHz, 1000MHz with the higher voltage of 850mV, 900mV, 950mV respectively. Fixes: f38ea593ad0d ("arm64: dts: mediatek: mt8186: Wire up GPU voltage/frequency scaling") Link: https://lore.kernel.org/r/20240725072243.173104-1-angelogioacchino.delregno@collabora.com Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
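For readers unfamiliar with opp-supported-hw matching, a tiny demo of the rule the masks above rely on: an OPP stays enabled when its mask ANDed with BIT(speedbin) is non-zero (the mask values mirror the commit message; the helper is illustrative):

#include <stdio.h>

static int opp_enabled(unsigned int supported_hw, unsigned int speedbin)
{
        return (supported_hw & (1u << speedbin)) != 0;
}

int main(void)
{
        /* 0x30 matched speedbins 4 and 5, duplicating bin 4's OPPs; 0x20
         * matches only bin 5; 0xcf covers every bin except 4 and 5. */
        printf("0x30 bin4=%d  0x20 bin4=%d  0xcf bin4=%d  0xcf bin6=%d\n",
               opp_enabled(0x30, 4), opp_enabled(0x20, 4),
               opp_enabled(0xcf, 4), opp_enabled(0xcf, 6));
        return 0;
}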
* arm64: dts: exynos: exynos7885-jackpotlte: Correct RAM amount to 4GBDavid Virag2024-10-041-1/+1
| | | | | | | | | | | | | | [ Upstream commit d281814b8f7a710a75258da883fb0dfe1329c031 ] All known jackpotlte variants have 4GB of RAM, let's use it all. RAM was set to 3GB from a mistake in the vendor provided DTS file. Fixes: 06874015327b ("arm64: dts: exynos: Add initial device tree support for Exynos7885 SoC") Signed-off-by: David Virag <virag.david003@gmail.com> Reviewed-by: Sam Protsenko <semen.protsenko@linaro.org> Link: https://lore.kernel.org/r/20240713180607.147942-3-virag.david003@gmail.com Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
* x86/sgx: Fix deadlock in SGX NUMA node searchAaron Lu2024-10-041-13/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 9c936844010466535bd46ea4ce4656ef17653644 ] When the current node doesn't have an EPC section configured by firmware and all other EPC sections are used up, CPU can get stuck inside the while loop that looks for an available EPC page from remote nodes indefinitely, leading to a soft lockup. Note how nid_of_current will never be equal to nid in that while loop because nid_of_current is not set in sgx_numa_mask. Also worth mentioning is that it's perfectly fine for the firmware not to setup an EPC section on a node. While setting up an EPC section on each node can enhance performance, it is not a requirement for functionality. Rework the loop to start and end on *a* node that has SGX memory. This avoids the deadlock looking for the current SGX-lacking node to show up in the loop when it never will. Fixes: 901ddbb9ecf5 ("x86/sgx: Add a basic NUMA allocation scheme to sgx_alloc_epc_page()") Reported-by: "Molina Sabido, Gerardo" <gerardo.molina.sabido@intel.com> Signed-off-by: Aaron Lu <aaron.lu@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Tested-by: Zhimin Luo <zhimin.luo@intel.com> Link: https://lore.kernel.org/all/20240905080855.1699814-2-aaron.lu%40intel.com Signed-off-by: Sasha Levin <sashal@kernel.org>
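A user-space model of the loop rework described above; the node count and EPC map are made up, but it shows why anchoring the walk on a node that actually has SGX memory guarantees termination even when the current node has none:

#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4

static bool node_has_epc[NR_NODES] = { false, true, false, true };

/* Pick the first node with EPC at or after 'start', wrapping around. */
static int find_epc_node(int start)
{
        for (int i = 0; i < NR_NODES; i++) {
                int nid = (start + i) % NR_NODES;

                if (node_has_epc[nid])
                        return nid;
        }
        return -1;                      /* no EPC configured anywhere */
}

int main(void)
{
        int current_node = 0;           /* firmware gave this node no EPC */
        int first = find_epc_node(current_node);
        int nid = first;

        if (first < 0)
                return 1;

        do {
                printf("trying EPC on node %d\n", nid);
                /* ... try to allocate an EPC page from 'nid' ... */
                nid = find_epc_node((nid + 1) % NR_NODES);
        } while (nid != first);         /* terminates: 'first' is in the set */

        return 0;
}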
* arm64: smp: smp_send_stop() and crash_smp_send_stop() should try non-NMI firstDouglas Anderson2024-10-041-63/+97
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit fdfa588124b6356cd08e5d3f0c3643c4ec3d6887 ] When testing hard lockup handling on my sc7180-trogdor-lazor device with pseudo-NMI enabled, with serial console enabled and with kgdb disabled, I found that the stack crawls printed to the serial console ended up as a jumbled mess. After rebooting, the pstore-based console looked fine though. Also, enabling kgdb to trap the panic made the console look fine and avoided the mess. After a bit of tracking down, I came to the conclusion that this was what was happening: 1. The panic path was stopping all other CPUs with panic_other_cpus_shutdown(). 2. At least one of those other CPUs was in the middle of printing to the serial console and holding the console port's lock, which is grabbed with "irqsave". ...but since we were stopping with an NMI we didn't care about the "irqsave" and interrupted anyway. 3. Since we stopped the CPU while it was holding the lock it would never release it. 4. All future calls to output to the console would end up failing to get the lock in qcom_geni_serial_console_write(). This isn't _totally_ unexpected at panic time but it's a code path that's not well tested, hard to get right, and apparently doesn't work terribly well on the Qualcomm geni serial driver. The Qualcomm geni serial driver was fixed to be a bit better in commit 9e957a155005 ("serial: qcom-geni: Don't cancel/abort if we can't get the port lock") but it's nice not to get into this situation in the first place. Taking a page from what x86 appears to do in native_stop_other_cpus(), do this: 1. First, try to stop other CPUs with a normal IPI and wait a second. This gives them a chance to leave critical sections. 2. If CPUs fail to stop then retry with an NMI, but give a much lower timeout since there's no good reason for a CPU not to react quickly to a NMI. This works well and avoids the corrupted console and (presumably) could help avoid other similar issues. In order to do this, we need to do a little re-organization of our IPIs since we don't have any more free IDs. Do what was suggested in previous conversations and combine "stop" and "crash stop". That frees up an IPI so now we can have a "stop" and "stop NMI". In order to do this we also need a slight change in the way we keep track of which CPUs still need to be stopped. We need to know specifically which CPUs haven't stopped yet when we fall back to NMI but in the "crash stop" case the "cpu_online_mask" isn't updated as CPUs go down. This is why that code path had an atomic of the number of CPUs left. Solve this by also updating the "cpu_online_mask" for crash stops. All of the above lets us combine the logic for "stop" and "crash stop" code, which appeared to have a bunch of arbitrary implementation differences. Aside from the above change where we try a normal IPI and then an NMI, the combined function has a few subtle differences: * In the normal smp_send_stop(), if we fail to stop one or more CPUs then we won't include the current CPU (the one running smp_send_stop()) in the error message. * In crash_smp_send_stop(), if we fail to stop some CPUs we'll print the CPUs that we failed to stop instead of printing all _but_ the current running CPU. * In crash_smp_send_stop(), we will now only print "SMP: stopping secondary CPUs" if (system_state <= SYSTEM_RUNNING). 
Fixes: d7402513c935 ("arm64: smp: IPI_CPU_STOP and IPI_CPU_CRASH_STOP should try for NMI") Signed-off-by: Douglas Anderson <dianders@chromium.org> Link: https://lore.kernel.org/r/20240821145353.v3.1.Id4817adef610302554b8aa42b090d57270dc119c@changeid Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
* ARM: 9410/1: vfp: Use asm volatile in fmrx/fmxr macrosCalvin Owens2024-10-041-22/+26
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 89a906dfa8c3b21b3e5360f73c49234ac1eb885b ] Floating point instructions in userspace can crash some arm kernels built with clang/LLD 17.0.6: BUG: unsupported FP instruction in kernel mode FPEXC == 0xc0000780 Internal error: Oops - undefined instruction: 0 [#1] ARM CPU: 0 PID: 196 Comm: vfp-reproducer Not tainted 6.10.0 #1 Hardware name: BCM2835 PC is at vfp_support_entry+0xc8/0x2cc LR is at do_undefinstr+0xa8/0x250 pc : [<c0101d50>] lr : [<c010a80c>] psr: a0000013 sp : dc8d1f68 ip : 60000013 fp : bedea19c r10: ec532b17 r9 : 00000010 r8 : 0044766c r7 : c0000780 r6 : ec532b17 r5 : c1c13800 r4 : dc8d1fb0 r3 : c10072c4 r2 : c0101c88 r1 : ec532b17 r0 : 0044766c Flags: NzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none Control: 00c5387d Table: 0251c008 DAC: 00000051 Register r0 information: non-paged memory Register r1 information: vmalloc memory Register r2 information: non-slab/vmalloc memory Register r3 information: non-slab/vmalloc memory Register r4 information: 2-page vmalloc region Register r5 information: slab kmalloc-cg-2k Register r6 information: vmalloc memory Register r7 information: non-slab/vmalloc memory Register r8 information: non-paged memory Register r9 information: zero-size pointer Register r10 information: vmalloc memory Register r11 information: non-paged memory Register r12 information: non-paged memory Process vfp-reproducer (pid: 196, stack limit = 0x61aaaf8b) Stack: (0xdc8d1f68 to 0xdc8d2000) 1f60: 0000081f b6f69300 0000000f c10073f4 c10072c4 dc8d1fb0 1f80: ec532b17 0c532b17 0044766c b6f9ccd8 00000000 c010a80c 00447670 60000010 1fa0: ffffffff c1c13800 00c5387d c0100f10 b6f68af8 00448fc0 00000000 bedea188 1fc0: bedea314 00000001 00448ebc b6f9d000 00447608 b6f9ccd8 00000000 bedea19c 1fe0: bede9198 bedea188 b6e1061c 0044766c 60000010 ffffffff 00000000 00000000 Call trace: [<c0101d50>] (vfp_support_entry) from [<c010a80c>] (do_undefinstr+0xa8/0x250) [<c010a80c>] (do_undefinstr) from [<c0100f10>] (__und_usr+0x70/0x80) Exception stack(0xdc8d1fb0 to 0xdc8d1ff8) 1fa0: b6f68af8 00448fc0 00000000 bedea188 1fc0: bedea314 00000001 00448ebc b6f9d000 00447608 b6f9ccd8 00000000 bedea19c 1fe0: bede9198 bedea188 b6e1061c 0044766c 60000010 ffffffff Code: 0a000061 e3877202 e594003c e3a09010 (eef16a10) ---[ end trace 0000000000000000 ]--- Kernel panic - not syncing: Fatal exception in interrupt ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]--- This is a minimal userspace reproducer on a Raspberry Pi Zero W: #include <stdio.h> #include <math.h> int main(void) { double v = 1.0; printf("%fn", NAN + *(volatile double *)&v); return 0; } Another way to consistently trigger the oops is: calvin@raspberry-pi-zero-w ~$ python -c "import json" The bug reproduces only when the kernel is built with DYNAMIC_DEBUG=n, because the pr_debug() calls act as barriers even when not activated. 
This is the output from the same kernel source built with the same compiler and DYNAMIC_DEBUG=y, where the userspace reproducer works as expected: VFP: bounce: trigger ec532b17 fpexc c0000780 VFP: emulate: INST=0xee377b06 SCR=0x00000000 VFP: bounce: trigger eef1fa10 fpexc c0000780 VFP: emulate: INST=0xeeb40b40 SCR=0x00000000 VFP: raising exceptions 30000000 calvin@raspberry-pi-zero-w ~$ ./vfp-reproducer nan Crudely grepping for vmsr/vmrs instructions in the otherwise nearly idential text for vfp_support_entry() makes the problem obvious: vmlinux.llvm.good [0xc0101cb8] <+48>: vmrs r7, fpexc vmlinux.llvm.good [0xc0101cd8] <+80>: vmsr fpexc, r0 vmlinux.llvm.good [0xc0101d20] <+152>: vmsr fpexc, r7 vmlinux.llvm.good [0xc0101d38] <+176>: vmrs r4, fpexc vmlinux.llvm.good [0xc0101d6c] <+228>: vmrs r0, fpscr vmlinux.llvm.good [0xc0101dc4] <+316>: vmsr fpexc, r0 vmlinux.llvm.good [0xc0101dc8] <+320>: vmrs r0, fpsid vmlinux.llvm.good [0xc0101dcc] <+324>: vmrs r6, fpscr vmlinux.llvm.good [0xc0101e10] <+392>: vmrs r10, fpinst vmlinux.llvm.good [0xc0101eb8] <+560>: vmrs r10, fpinst2 vmlinux.llvm.bad [0xc0101cb8] <+48>: vmrs r7, fpexc vmlinux.llvm.bad [0xc0101cd8] <+80>: vmsr fpexc, r0 vmlinux.llvm.bad [0xc0101d20] <+152>: vmsr fpexc, r7 vmlinux.llvm.bad [0xc0101d30] <+168>: vmrs r0, fpscr vmlinux.llvm.bad [0xc0101d50] <+200>: vmrs r6, fpscr <== BOOM! vmlinux.llvm.bad [0xc0101d6c] <+228>: vmsr fpexc, r0 vmlinux.llvm.bad [0xc0101d70] <+232>: vmrs r0, fpsid vmlinux.llvm.bad [0xc0101da4] <+284>: vmrs r10, fpinst vmlinux.llvm.bad [0xc0101df8] <+368>: vmrs r4, fpexc vmlinux.llvm.bad [0xc0101e5c] <+468>: vmrs r10, fpinst2 I think LLVM's reordering is valid as the code is currently written: the compiler doesn't know the instructions have side effects in hardware. Fix by using "asm volatile" in fmxr() and fmrx(), so they cannot be reordered with respect to each other. The original compiler now produces working kernels on my hardware with DYNAMIC_DEBUG=n. This is the relevant piece of the diff of the vfp_support_entry() text, from the original oopsing kernel to a working kernel with this patch: vmrs r0, fpscr tst r0, #4096 bne 0xc0101d48 tst r0, #458752 beq 0xc0101ecc orr r7, r7, #536870912 ldr r0, [r4, #0x3c] mov r9, #16 -vmrs r6, fpscr orr r9, r9, #251658240 add r0, r0, #4 str r0, [r4, #0x3c] mvn r0, #159 sub r0, r0, #-1207959552 and r0, r7, r0 vmsr fpexc, r0 vmrs r0, fpsid +vmrs r6, fpscr and r0, r0, #983040 cmp r0, #65536 bne 0xc0101d88 Fixes: 4708fb041346 ("ARM: vfp: Reimplement VFP exception entry in C code") Signed-off-by: Calvin Owens <calvin@wbinvd.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
* RISC-V: KVM: Fix to allow hpmcounter31 from the guestAtish Patra2024-10-041-3/+3
| | | | | | | | | | | | | | [ Upstream commit 5aa09297a3dcc798d038bd7436f8c90f664045a6 ] The csr_fun defines a count parameter which gives the total number of CSRs emulated in KVM starting from the base. This value should be equal to the total number of counters possible for trap/emulation (32). Fixes: a9ac6c37521f ("RISC-V: KVM: Implement trap & emulate for hpmcounters") Signed-off-by: Atish Patra <atishp@rivosinc.com> Link: https://lore.kernel.org/r/20240816-kvm_pmu_fixes-v1-2-cdfce386dd93@rivosinc.com Signed-off-by: Anup Patel <anup@brainfault.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
* RISC-V: KVM: Allow legacy PMU access from guestAtish Patra2024-10-041-1/+14
| | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 7d1ffc8b087e97dbe1985912c7a2d00e53cea169 ] Currently, KVM traps & emulates PMU counter access only if SBI PMU is available, as the guest can only configure/read PMU counters via SBI. However, if SBI PMU is not enabled in the host, the guest will fall back to the legacy PMU, which will try to access cycle/instret and result in an illegal instruction trap, which is not desired. KVM can allow dummy emulation of cycle/instret for the guest if SBI PMU is not enabled in the host. The dummy emulation will still return zero, as we don't want to expose the host counter values to a guest using the legacy PMU. Fixes: a9ac6c37521f ("RISC-V: KVM: Implement trap & emulate for hpmcounters") Signed-off-by: Atish Patra <atishp@rivosinc.com> Link: https://lore.kernel.org/r/20240816-kvm_pmu_fixes-v1-1-cdfce386dd93@rivosinc.com Signed-off-by: Anup Patel <anup@brainfault.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
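A hedged sketch of what such dummy emulation can look like (the function name, return codes and exact condition are assumptions for illustration, not quoted from the patch): reads of the legacy counters return zero and complete the instruction instead of raising an illegal-instruction trap.

  /* Sketch: zero-value emulation of the legacy cycle/instret counters. */
  static int kvm_riscv_vcpu_pmu_read_legacy(struct kvm_vcpu *vcpu, unsigned int csr_num,
  					   unsigned long *val, unsigned long new_val,
  					   unsigned long wr_mask)
  {
  	if (csr_num == CSR_CYCLE || csr_num == CSR_INSTRET) {
  		*val = 0;	/* never leak the host's counter values */
  		return KVM_INSN_CONTINUE_NEXT_SEPC;
  	}
  	return KVM_INSN_ILLEGAL_TRAP;
  }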
* RISC-V: KVM: Don't zero-out PMU snapshot area before freeing dataAnup Patel2024-10-041-12/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
[ Upstream commit 47d40d93292d9cff8dabb735bed83d930fa03950 ]

With the latest Linux-6.11-rc3, the below NULL pointer crash is observed
when SBI PMU snapshot is enabled for the guest and the guest is
forcefully powered off.

  Unable to handle kernel NULL pointer dereference at virtual address 0000000000000508
  Oops [#1]
  Modules linked in: kvm
  CPU: 0 UID: 0 PID: 61 Comm: term-poll Not tainted 6.11.0-rc3-00018-g44d7178dd77a #3
  Hardware name: riscv-virtio,qemu (DT)
  epc : __kvm_write_guest_page+0x94/0xa6 [kvm]
   ra : __kvm_write_guest_page+0x54/0xa6 [kvm]
  epc : ffffffff01590e98 ra : ffffffff01590e58 sp : ffff8f80001f39b0
   gp : ffffffff81512a60 tp : ffffaf80024872c0 t0 : ffffaf800247e000
   t1 : 00000000000007e0 t2 : 0000000000000000 s0 : ffff8f80001f39f0
   s1 : 00007fff89ac4000 a0 : ffffffff015dd7e8 a1 : 0000000000000086
   a2 : 0000000000000000 a3 : ffffaf8000000000 a4 : ffffaf80024882c0
   a5 : 0000000000000000 a6 : ffffaf800328d780 a7 : 00000000000001cc
   s2 : ffffaf800197bd00 s3 : 00000000000828c4 s4 : ffffaf800248c000
   s5 : ffffaf800247d000 s6 : 0000000000001000 s7 : 0000000000001000
   s8 : 0000000000000000 s9 : 00007fff861fd500 s10: 0000000000000001
   s11: 0000000000800000 t3 : 00000000000004d3 t4 : 00000000000004d3
   t5 : ffffffff814126e0 t6 : ffffffff81412700
  status: 0000000200000120 badaddr: 0000000000000508 cause: 000000000000000d
  [<ffffffff01590e98>] __kvm_write_guest_page+0x94/0xa6 [kvm]
  [<ffffffff015943a6>] kvm_vcpu_write_guest+0x56/0x90 [kvm]
  [<ffffffff015a175c>] kvm_pmu_clear_snapshot_area+0x42/0x7e [kvm]
  [<ffffffff015a1972>] kvm_riscv_vcpu_pmu_deinit.part.0+0xe0/0x14e [kvm]
  [<ffffffff015a2ad0>] kvm_riscv_vcpu_pmu_deinit+0x1a/0x24 [kvm]
  [<ffffffff0159b344>] kvm_arch_vcpu_destroy+0x28/0x4c [kvm]
  [<ffffffff0158e420>] kvm_destroy_vcpus+0x5a/0xda [kvm]
  [<ffffffff0159930c>] kvm_arch_destroy_vm+0x14/0x28 [kvm]
  [<ffffffff01593260>] kvm_destroy_vm+0x168/0x2a0 [kvm]
  [<ffffffff015933d4>] kvm_put_kvm+0x3c/0x58 [kvm]
  [<ffffffff01593412>] kvm_vm_release+0x22/0x2e [kvm]

Clearly, the kvm_vcpu_write_guest() function is crashing because it is
being called from kvm_pmu_clear_snapshot_area() upon guest teardown.

To address the above issue, simplify kvm_pmu_clear_snapshot_area() so
that it does not zero out the PMU snapshot area, because the guest is
being torn down anyway. kvm_pmu_clear_snapshot_area() is also called
when the guest changes the PMU snapshot area of a vCPU, but even in
this case the previous PMU snapshot area must not be zeroed out,
because the guest might have reclaimed it for some other purpose.

Fixes: c2f41ddbcdd7 ("RISC-V: KVM: Implement SBI PMU Snapshot feature")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20240815170907.2792229-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
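A minimal sketch of the simplified teardown, assuming the kvm_pmu field names used below (they are illustrative, not quoted from the patch): the host-side snapshot buffer is freed and the recorded GPA invalidated without any write to guest memory.

  static void kvm_pmu_clear_snapshot_area(struct kvm_vcpu *vcpu)
  {
  	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);

  	kfree(kvpmu->sdata);			/* host-side shadow of the snapshot */
  	kvpmu->sdata = NULL;
  	kvpmu->snapshot_addr = INVALID_GPA;	/* host stops writing the old GPA */
  }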
* RISC-V: KVM: Fix sbiret init before forwarding to userspaceAndrew Jones2024-10-041-2/+2
| | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 6b7b282e6baea06ba65b55ae7d38326ceb79cebf ] When forwarding SBI calls to userspace, ensure sbiret.error is initialized to SBI_ERR_NOT_SUPPORTED first, in case userspace neglects to set it to anything. If userspace neglects it, then we can't be sure it did anything else either, so we just report that it didn't do or try anything. Just init sbiret.value to zero, which is the preferred value to return when nothing special is specified. KVM was already initializing both sbiret.error and sbiret.value, but the values used appear to come from a copy+paste of the __sbi_ecall() implementation, i.e. a0 and a1, which don't apply prior to the call being executed, nor at all when forwarding to userspace. Fixes: dea8ee31a039 ("RISC-V: KVM: Add SBI v0.1 support") Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20240807154943.150540-2-ajones@ventanamicro.com Signed-off-by: Anup Patel <anup@brainfault.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
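A hedged sketch of the intent (the riscv_sbi fields follow the documented struct kvm_run layout; the surrounding variable names are assumptions): the defaults are written before exiting to userspace, so a VMM that never touches ret[] still yields a sane result.

  /* Sketch: forwarding an SBI call to userspace with safe defaults. */
  run->exit_reason            = KVM_EXIT_RISCV_SBI;
  run->riscv_sbi.extension_id = cp->a7;
  run->riscv_sbi.function_id  = cp->a6;
  /* If userspace does nothing: error = "not supported", value = 0. */
  run->riscv_sbi.ret[0] = SBI_ERR_NOT_SUPPORTED;
  run->riscv_sbi.ret[1] = 0;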
* arm64: signal: Fix some under-bracketed UAPI macrosDave Martin2024-10-041-3/+3
| | | | | | | | | | | | | | | | | | [ Upstream commit fc2220c9b15828319b09384e68399b4afc6276d9 ] A few SME-related sigcontext UAPI macros leave an argument unprotected from misparsing during macro expansion. Add parentheses around references to macro arguments where appropriate. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Fixes: ee072cf70804 ("arm64/sme: Implement signal handling for ZT") Fixes: 39782210eb7e ("arm64/sme: Implement ZA signal handling") Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20240729152005.289844-1-Dave.Martin@arm.com Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
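The bug class is easiest to see with a hypothetical macro rather than the actual sigcontext definitions (the names below are made up for illustration): without parentheses, an argument such as n + 1 binds to the surrounding operator instead of being evaluated first.

  /* Hypothetical illustration only, not the real arm64 UAPI macros. */
  #define REGS_SIZE_BAD(vq)	vq * 16		/* REGS_SIZE_BAD(n + 1)  -> n + 1 * 16     */
  #define REGS_SIZE_GOOD(vq)	((vq) * 16)	/* REGS_SIZE_GOOD(n + 1) -> ((n + 1) * 16) */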
* x86/hyperv: Set X86_FEATURE_TSC_KNOWN_FREQ when Hyper-V provides frequencyMichael Kelley2024-09-301-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 8fcc514809de41153b43ccbe1a0cdf7f72b78e7e ] A Linux guest on Hyper-V gets the TSC frequency from a synthetic MSR, if available. In this case, set X86_FEATURE_TSC_KNOWN_FREQ so that Linux doesn't unnecessarily do refined TSC calibration when setting up the TSC clocksource. With this change, a message such as this is no longer output during boot when the TSC is used as the clocksource: [ 1.115141] tsc: Refined TSC clocksource calibration: 2918.408 MHz Furthermore, the guest and host will have exactly the same view of the TSC frequency, which is important for features such as the TSC deadline timer that are emulated by the Hyper-V host. Signed-off-by: Michael Kelley <mhklinux@outlook.com> Reviewed-by: Roman Kisel <romank@linux.microsoft.com> Link: https://lore.kernel.org/r/20240606025559.1631-1-mhklinux@outlook.com Signed-off-by: Wei Liu <wei.liu@kernel.org> Message-ID: <20240606025559.1631-1-mhklinux@outlook.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
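In code, the change amounts to forcing one CPU capability bit when the hypervisor exposes the frequency MSRs. The sketch below approximates the ms_hyperv platform-init path; treat the exact feature-flag names as assumptions rather than a quote of the patch.

  if (ms_hyperv.features & HV_ACCESS_FREQUENCY_MSRS &&
      ms_hyperv.misc_features & HV_FEATURE_FREQUENCY_MSRS_AVAILABLE) {
  	x86_platform.calibrate_tsc = hv_get_tsc_khz;
  	x86_platform.calibrate_cpu = hv_get_tsc_khz;
  	/* The frequency comes from the hypervisor, so skip refined calibration. */
  	setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);
  }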
* LoongArch: KVM: Invalidate guest steal time address on vCPU resetBibo Mao2024-09-303-9/+1
| | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 4956e07f05e239b274d042618a250c9fa3e92629 ] If the ParaVirt steal time feature is enabled, there is a percpu gpa address passed from the guest vCPU and the host modifies guest memory space at this gpa address. When a vCPU is reset normally, it notifies the host and invalidates the gpa address. However, if the VM crashes and the VMM reboots it forcibly, the vCPU reboot notification callback will not be called in the VM. The host needs to invalidate the gpa address, otherwise the host will modify guest memory while the VM reboots. Here it is invalidated from the vCPU KVM_REG_LOONGARCH_VCPU_RESET ioctl interface. Also, function kvm_reset_timer() is removed at the vCPU reset stage, since the SW emulated timer is only used in the vCPU block state. When a vCPU is removed from the block waiting queue, kvm_restore_timer() is called and the SW timer is cancelled. The timer register is also cleared by the VMM when a vCPU is reset. Signed-off-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Sasha Levin <sashal@kernel.org>
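A hedged sketch of the idea (the field names are illustrative assumptions, not quoted from the patch): clearing the recorded steal-time GPA in the reset ioctl path is what stops the host from writing into memory the rebooted guest may reuse.

  /* Sketch of the KVM_REG_LOONGARCH_VCPU_RESET handling. */
  case KVM_REG_LOONGARCH_VCPU_RESET:
  	/*
  	 * Forget the steal-time GPA registered by the previous kernel,
  	 * so the host no longer updates it behind the rebooted guest.
  	 */
  	vcpu->arch.st.guest_addr = 0;
  	break;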