path: root/arch/arm/kernel
Commit message (Author, Date, Files, Lines -/+)
...
| * | | ARM: 9184/1: return_address: disable again for CONFIG_ARM_UNWIND=y (Ard Biesheuvel, 2022-03-07, 1 file, -1/+4)

    Commit 41918ec82eb6 ("ARM: ftrace: enable the graph tracer with the EABI unwinder") removed the dummy version of return_address() that was provided for the CONFIG_ARM_UNWIND=y case, on the assumption that the removal of the kernel_text_address() call from unwind_frame() in the preceding patch made it safe to do so.

    However, this turns out not to be the case: Corentin reports warnings about suspicious RCU usage and other strange behavior that seems to originate in the stack unwinding that occurs in return_address(). Given that the function graph tracer (which is what these changes were enabling for CONFIG_ARM_UNWIND=y builds) does not appear to care about this distinction, let's revert return_address() to the old state.

    Cc: Corentin Labbe <clabbe.montjoie@gmail.com>
    Fixes: 41918ec82eb6 ("ARM: ftrace: enable the graph tracer with the EABI unwinder")
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
    Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
    Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

| * | | ARM: 9183/1: unwind: avoid spurious warnings on bogus code addresses (Ard Biesheuvel, 2022-03-07, 1 file, -1/+2)

    Corentin reports that since commit 538b9265c063 ("ARM: unwind: track location of LR value in stack frame"), numerous spurious warnings are emitted into the kernel log:

        [ 0.000000] unwind: Index not found c0f0c440
        [ 0.000000] unwind: Index not found 00000000
        [ 0.000000] unwind: Index not found c0f0c440
        [ 0.000000] unwind: Index not found 00000000

    This is due to the fact that the commit in question removes a check whether the PC value in the unwound frame is actually a kernel text address, on the assumption that such an address will not be associated with valid unwind data to begin with, which is checked right after.

    The reason for removing this check was that unwind_frame() will be called by the ftrace graph tracer code, which means that it can no longer be safely instrumented itself, or any code that it calls, as it could cause infinite recursion.

    In order to prevent the spurious diagnostics, let's add back the call to kernel_text_address(), but this time, only call it if no unwind data could be found for the address in question. This is more efficient for the common successful case, and should avoid any unintended recursion, considering that kernel_text_address() will only be called if no unwind data was found.

    Cc: Corentin Labbe <clabbe.montjoie@gmail.com>
    Fixes: 538b9265c063 ("ARM: unwind: track location of LR value in stack frame")
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

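    The resulting order of operations in unwind_frame() might look like this (a sketch based on the description above, not the literal patch; unwind_find_idx() and kernel_text_address() are existing kernel helpers):

        idx = unwind_find_idx(frame->pc);
        if (!idx) {
                /*
                 * No unwind data: only warn if the PC was a kernel text
                 * address to begin with. Bogus addresses are expected to
                 * have no unwind data, so fail silently for those, and
                 * keep kernel_text_address() off the common success path.
                 */
                if (kernel_text_address(frame->pc))
                        pr_warn("unwind: Index not found %08lx\n", frame->pc);
                return -URC_FAILURE;
        }
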
| * | | ARM: ftrace: enable the graph tracer with the EABI unwinder (Ard Biesheuvel, 2022-02-09, 3 files, -14/+38)

    Enable the function graph tracer in combination with the EABI unwinder, so that Thumb2 builds or Clang ARM builds can make use of it.

    This involves using the unwinder to locate the return address of an instrumented function on the stack, so that it can be overridden and made to refer to the ftrace handling routines that need to be called at function return.

    Given that for these builds, it is not guaranteed that the value of the link register is stored on the stack, fall back to the stack slot that will be used by the ftrace exit code to restore LR in the instrumented function's execution context.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

| * | | ARM: unwind: track location of LR value in stack frame (Ard Biesheuvel, 2022-02-09, 2 files, -3/+5)

    The ftrace graph tracer needs to override the return address of an instrumented function, in order to install a hook that gets invoked when the function returns again.

    Currently, we only support this when building for ARM using GCC with frame pointers, as in this case, it is guaranteed that the function will reload LR from [FP, #-4] in all cases, and we can simply pass that address to the ftrace code.

    In order to support this for configurations that rely on the EABI unwinder, such as Thumb2 builds, make the unwinder keep track of the address from which LR was unwound, permitting ftrace to make use of this in a subsequent patch.

    Drop the call to kernel_text_address(), which is problematic in terms of ftrace recursion, given that it may be instrumented itself. The call is redundant anyway, as no unwind directives will be found unless the PC points to memory that is known to contain executable code.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>

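    In other words, the unwinder's frame state grows a pointer to the stack slot that held LR; a sketch of the shape (the field name here is illustrative, the existing fields are those of ARM's struct stackframe):

        struct stackframe {
                unsigned long fp;
                unsigned long sp;
                unsigned long lr;
                unsigned long pc;
                /*
                 * Location on the stack from which the LR value above was
                 * loaded; the graph tracer can install its return hook by
                 * writing through this pointer (illustrative name).
                 */
                unsigned long *lr_addr;
        };
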
| * | | ARM: ftrace: enable HAVE_FUNCTION_GRAPH_FP_TEST (Ard Biesheuvel, 2022-02-09, 2 files, -1/+6)

    Fix the frame pointer handling in the function graph tracer entry and exit code so we can enable HAVE_FUNCTION_GRAPH_FP_TEST. Instead of using FP directly (which will have different values between the entry and exit pieces of the function graph tracer), use the value of SP at entry and exit, as we can derive the former value from the frame pointer.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

| * | | ARM: ftrace: avoid unnecessary literal loads (Ard Biesheuvel, 2022-02-09, 1 file, -16/+11)

    Avoid explicit literal loads and instead use accessor macros that generate the optimal sequence depending on the architecture revision being targeted.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
    Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

| * | | ARM: ftrace: avoid redundant loads or clobbering IP (Ard Biesheuvel, 2022-02-09, 1 file, -29/+22)

    Tweak the ftrace return paths to avoid redundant loads of SP, as well as unnecessary clobbering of IP.

    This also fixes the inconsistency of using MOV to perform a function return, which is sub-optimal on recent micro-architectures but more importantly, does not perform an interworking return, unlike compiler generated function returns in Thumb2 builds. Let's fix this by popping PC from the stack like most ordinary code does.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

| * | | ARM: ftrace: use trampolines to keep .init.text in branching range (Ard Biesheuvel, 2022-02-09, 2 files, -3/+36)

    Kernel images that are large in comparison to the range of a direct branch may fail to work as expected with ftrace, as patching a direct branch to one of the core ftrace routines may not be possible from the .init.text section, if it is emitted too far away from the normal .text section. This is more likely to affect Thumb2 builds, given that its range is only -/+ 16 MiB (as opposed to ARM which has -/+ 32 MiB), but may occur in either ISA.

    To work around this, add a couple of trampolines to .init.text and swap these in when the ftrace patching code is operating on callers in .init.text.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
    Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
    Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

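    The redirection this implies when resolving a call target could look roughly like this (a sketch under stated assumptions; init_section_contains() and MCOUNT_INSN_SIZE are existing kernel names, the trampoline name is illustrative):

        static unsigned long ftrace_call_target(unsigned long callsite,
                                                unsigned long addr)
        {
                /*
                 * Callers in .init.text may sit outside the -/+ 16 MiB
                 * (Thumb2) or -/+ 32 MiB (ARM) range of a direct BL to the
                 * core ftrace routines in .text, so route those through a
                 * trampoline emitted into .init.text itself.
                 */
                if (init_section_contains((void *)callsite, MCOUNT_INSN_SIZE))
                        return (unsigned long)&ftrace_caller_from_init;
                return addr;
        }
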
| * | | ARM: ftrace: use ADD not POP to counter PUSH at entry (Ard Biesheuvel, 2022-02-09, 2 files, -3/+14)

    The compiler emitted hook used for ftrace consists of a PUSH {LR} to preserve the link register, followed by a branch-and-link (BL) to __gnu_mcount_nc. Dynamic ftrace patches away the latter to turn the combined sequence into a NOP, using a POP {LR} instruction.

    This is not necessary, since the link register does not get clobbered in this case: simply adding #4 to the stack pointer is sufficient, and avoids a memory access that may take a few cycles to resolve depending on the micro-architecture.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
    Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
    Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

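    In terms of ARM-mode instruction encodings, the NOP replacement changes roughly as follows (a sketch derived from the description above; the constant names are illustrative):

        /* old: undo the 'push {lr}' by reloading LR from the stack */
        #define NOP_POP_LR      0xe8bd4000      /* pop {lr} */

        /* new: just discard the stack slot, no memory access needed */
        #define NOP_ADD_SP      0xe28dd004      /* add sp, sp, #4 */
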
| * | | ARM: ftrace: ensure that ADR takes the Thumb bit into account (Ard Biesheuvel, 2022-02-09, 1 file, -1/+1)

    Using ADR to take the address of 'ftrace_stub' via a local label produces an address that has the Thumb bit cleared, which means the subsequent comparison is guaranteed to fail. Instead, use the badr macro, which forces the Thumb bit to be set.

    Fixes: a3ba87a61499 ("ARM: 6316/1: ftrace: add Thumb-2 support")
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
    Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

| * | | ARM: drop pointless SMP check on secondary startup path (Ard Biesheuvel, 2022-01-25, 1 file, -5/+0)

    Only SMP systems use the secondary startup path by definition, so there is no need for SMP conditionals there.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

| * | | ARM: mm: make vmalloc_seq handling SMP safe (Ard Biesheuvel, 2022-01-25, 1 file, -18/+7)

    Rework the vmalloc_seq handling so it can be used safely under SMP: we started using it to ensure that vmap'ed stacks are guaranteed to be mapped by the active mm before switching to a task, and for that, we need to ensure that changes to the page tables are visible to other CPUs when they observe a change in the sequence count.

    Since LPAE needs none of this, fold a check against it into the vmalloc_seq counter check after breaking it out into a separate static inline helper.

    Given that vmap'ed stacks are now also supported on !SMP configurations, let's drop the WARN() that could potentially now fire spuriously.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

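    The resulting static inline helper plausibly has this shape (a sketch; the kernel's version lives in the mmu_context header and treats vmalloc_seq as an atomic counter per mm):

        static inline void check_vmalloc_seq(struct mm_struct *mm)
        {
                /*
                 * LPAE kernels cover the vmalloc region with shared PGD
                 * entries, so no re-sync against init_mm is needed there.
                 */
                if (!IS_ENABLED(CONFIG_ARM_LPAE) &&
                    unlikely(atomic_read(&mm->context.vmalloc_seq) !=
                             atomic_read(&init_mm.context.vmalloc_seq)))
                        __check_vmalloc_seq(mm);
        }
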
| * | | ARM: entry: avoid clobbering R9 in IRQ handler (Ard Biesheuvel, 2022-01-25, 1 file, -5/+4)

    Avoid using R9 in the IRQ handler code, as the entry code uses it for tsk, and expects it to remain untouched between the IRQ entry and exit code.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

| * | | ARM: smp: elide HWCAP_TLS checks or __entry_task updates on SMP+v6 (Ard Biesheuvel, 2022-01-25, 1 file, -10/+7)

    Use the SMP_ON_UP patching framework to elide HWCAP_TLS tests from the context switch and return to userspace code paths, as SMP systems are guaranteed to have this h/w capability.

    At the same time, omit the update of __entry_task if the system is detected to be UP at runtime, as in that case, the value is never used.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

| * | | ARM: assembler: define a Kconfig symbol for group relocation support (Ard Biesheuvel, 2022-01-24, 1 file, -1/+6)

    Nathan reports the group relocations go out of range in pathological cases such as allyesconfig kernels, which have little chance of actually booting but are still used in validation. So add a Kconfig symbol for this feature, and make it depend on !COMPILE_TEST.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

| * | | ARM: mm: switch to swapper_pg_dir early for vmap'ed stack (Ard Biesheuvel, 2022-01-24, 2 files, -0/+14)

    When onlining a CPU, switch to swapper_pg_dir as soon as possible so that it is guaranteed that the vmap'ed stack is mapped before it is used.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

| * | | ARM: v7m: enable support for IRQ stacks (Ard Biesheuvel, 2021-12-06, 1 file, -2/+15)

    Enable support for IRQ stacks on !MMU, and add the code to the IRQ entry path to switch to the IRQ stack if not running from it already.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems (Ard Biesheuvel, 2021-12-06, 7 files, -16/+34)

    On UP systems, only a single task can be 'current' at the same time, which means we can use a global variable to track it. This means we can also enable THREAD_INFO_IN_TASK for those systems, as in that case, thread_info is accessed via current rather than the other way around, removing the need to store thread_info at the base of the task stack. This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP systems as well.

    To partially mitigate the performance overhead of this arrangement, use an ADD/ADD/LDR sequence with the appropriate PC-relative group relocations to load the value of current when needed. This means that accessing current will still only require a single load as before, avoiding the need for a literal to carry the address of the global variable in each function. However, accessing thread_info will now require this load as well.

    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Acked-by: Nicolas Pitre <nico@fluxnic.net>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

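    On UP, 'current' thus collapses to a plain global; a minimal sketch of the idea (the kernel's actual accessor layers the relocation tricks described above on top of this):

        /* a single CPU implies a single 'current' task at any time */
        extern struct task_struct *__current;

        static __always_inline struct task_struct *get_current(void)
        {
                return __current;
        }

        #define current get_current()
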
| * | | ARM: smp: defer TPIDRURO update for SMP v6 configurations too (Ard Biesheuvel, 2021-12-06, 1 file, -1/+10)

    Defer TPIDRURO updates for user space until exit also for CPU_V6+SMP configurations, so that we can decide at runtime whether to use it to carry the current pointer, provided that we are running on a CPU that actually implements this register.

    This is needed for THREAD_INFO_IN_TASK support for UP systems, which requires that all SMP capable systems use the TPIDRURO based access to 'current', as the only remaining alternative will be a global variable which only works on UP.

    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Acked-by: Nicolas Pitre <nico@fluxnic.net>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: percpu: add SMP_ON_UP support (Ard Biesheuvel, 2021-12-06, 2 files, -16/+4)

    Permit the use of the TPIDRPRW system register for carrying the per-CPU offset in generic SMP configurations that also target non-SMP capable ARMv6 cores. This uses the SMP_ON_UP code patching framework to turn all TPIDRPRW accesses into reads/writes of entry #0 in the __per_cpu_offset array.

    While at it, switch over some existing direct TPIDRPRW accesses in asm code to invocations of a new helper that is patched in the same way when necessary.

    Note that CPU_V6+SMP without SMP_ON_UP results in a kernel that does not boot on v6 CPUs without SMP extensions, so add this dependency to Kconfig as well.

    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Acked-by: Nicolas Pitre <nico@fluxnic.net>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

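    For reference, the SMP-side per-CPU offset accessor is essentially a TPIDRPRW read; a sketch (on an SMP_ON_UP kernel running on a UP core, the patching framework replaces the MRC with a load of __per_cpu_offset[0]):

        static inline unsigned long __my_cpu_offset(void)
        {
                unsigned long off;

                /* TPIDRPRW holds the per-CPU offset stashed at boot */
                asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off));
                return off;
        }
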
| * | | ARM: assembler: add optimized ldr/str macros to load variables from memory (Ard Biesheuvel, 2021-12-06, 2 files, -2/+2)

    We will be adding variable loads to various hot paths, so it makes sense to add a helper macro that can load variables from asm code without the use of literal pool entries. On v7 or later, we can simply use MOVW/MOVT pairs, but on earlier cores, this requires a bit of hackery to emit an equivalent sequence of ADD/LDR instructions.

    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Acked-by: Nicolas Pitre <nico@fluxnic.net>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

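    The v7 flavour boils down to materializing the address in two immediate halves; a sketch in inline asm (my_var is a placeholder for a real global visible to the assembler):

        extern unsigned long my_var;    /* placeholder symbol */
        unsigned long addr, val;

        /* two immediate moves instead of a literal pool entry, so no
         * D-cache line is spent carrying the variable's address */
        asm("movw %0, #:lower16:my_var\n\t"
            "movt %0, #:upper16:my_var" : "=r" (addr));
        val = *(unsigned long *)addr;
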
| * | | ARM: module: implement support for PC-relative group relocations (Ard Biesheuvel, 2021-12-06, 1 file, -0/+85)

    Add support for the R_ARM_ALU_PC_Gn_NC and R_ARM_LDR_PC_G2 group relocations [0] so we can use them in modules. These will be used to load the current task pointer from a global variable without having to rely on a literal pool entry to carry the address of this variable, which may have a significant negative impact on cache utilization for variables that are used often and in many different places, as each occurrence will result in a literal pool entry and therefore a line in the D-cache.

    [0] 'ELF for the ARM architecture'
        https://github.com/ARM-software/abi-aa/releases

    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Acked-by: Nicolas Pitre <nico@fluxnic.net>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: entry: preserve thread_info pointer in switch_to (Ard Biesheuvel, 2021-12-06, 1 file, -8/+9)

    Tweak the UP stack protector handling code so that the thread info pointer is preserved in R7 until set_current is called. This is needed for a subsequent patch that implements THREAD_INFO_IN_TASK and set_current for UP as well.

    This also means we will prefer the per-task protector on UP systems that implement the thread ID registers, so tweak the preprocessor conditionals to reflect this.

    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Acked-by: Nicolas Pitre <nico@fluxnic.net>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | irqchip: nvic: Use GENERIC_IRQ_MULTI_HANDLER (Vladimir Murzin, 2021-12-06, 1 file, -7/+3)

    Rather than restructuring the ARMv7-M entry logic per the TODO, just move NVIC to GENERIC_IRQ_MULTI_HANDLER.

    Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Marc Zyngier <maz@kernel.org>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: remove old-style irq entry (Arnd Bergmann, 2021-12-06, 2 files, -25/+0)

    The last user of arch_irq_handler_default is gone now, so the entry-macro-multi.S file and all references to mach/entry-macro.S can be removed, as well as the asm_do_IRQ() entrypoint into the interrupt handling routines implemented in C.

    Note: The ARMv7-M entry still uses its own top-level IRQ entry, calling nvic_handle_irq() from assembly. This could be changed to go through generic_handle_arch_irq() as well, but it's unclear to me if there are any benefits.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    [ardb: keep irq_handler macro as it carries all the IRQ stack handling]
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
    Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

| * | | ARM: iop32x: use GENERIC_IRQ_MULTI_HANDLER (Arnd Bergmann, 2021-12-06, 1 file, -7/+9)

    iop32x uses the entry-macro.S file for both the IRQ entry and for hooking into the arch_ret_to_user code path. This is done because the cp6 registers have to be enabled before accessing any of the interrupt controller registers but have to be disabled when running in user space.

    There is also a lazy-enable logic in cp6.c, but during a hardirq, we know it has to be enabled.

    Both the cp6-enable code and the code to read the IRQ status can be lifted into the normal generic_handle_arch_irq() path, but the cp6-disable code has to remain in the user return code. As nothing other than iop32x uses this hook, just open-code it there with an ifdef for the platform that can eventually be removed when iop32x has reached the end of its life.

    The cp6-enable path in the IRQ entry has an extra cp_wait barrier that the trap version does not have, but it is harmless to do it in both cases to simplify the logic here at the cost of a few extra cycles for the trap.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: implement support for vmap'ed stacks (Ard Biesheuvel, 2021-12-03, 8 files, -15/+231)

    Wire up the generic support for managing task stack allocations via vmalloc, and implement the entry code that detects whether we faulted because of a stack overrun (or future stack overrun caused by pushing the pt_regs array).

    While this adds a fair amount of tricky entry asm code, it should be noted that it only adds a TST + branch to the svc_entry path. The code implementing the non-trivial handling of the overflow stack is emitted out-of-line into the .text section.

    Since on ARM, we rely on do_translation_fault() to keep PMD level page table entries that cover the vmalloc region up to date, we need to ensure that we don't hit such a stale PMD entry when accessing the stack. So we do a dummy read from the new stack while still running from the old one on the context switch path, and bump the vmalloc_seq counter when PMD level entries in the vmalloc range are modified, so that the MM switch fetches the latest version of the entries.

    Note that we need to increase the per-mode stack by 1 word, to gain some space to stash a GPR until we know it is safe to touch the stack. However, due to the cacheline alignment of the struct, this does not actually increase the memory footprint of the struct stack array at all.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Keith Packard <keithpac@amazon.com>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: entry: rework stack realignment code in svc_entry (Ard Biesheuvel, 2021-12-03, 1 file, -11/+14)

    The original Thumb-2 enablement patches updated the stack realignment code in svc_entry to work around the lack of a STMIB instruction in Thumb-2, by subtracting 4 from the frame size, inverting the sense of the misalignment check, and changing to a STMIA instruction and a final stack push of a 4 byte quantity that results in the stack becoming aligned at the end of the sequence. It also pushes and pops R0 to the stack in order to have a temp register that Thumb-2 allows in general purpose ALU instructions, as TST using SP is not permitted.

    Both are a bit problematic for vmap'ed stacks, as using the stack is only permitted after we decide that we did not overflow the stack, or have already switched to the overflow stack.

    As for the alignment check: the current approach creates a corner case where, if the initial SUB of SP ends up right at the start of the stack, we will end up subtracting another 8 bytes and overflowing it. This means we would need to add the overflow check *after* the SUB that deliberately misaligns the stack. However, this would require us to keep local state (i.e., whether we performed the subtract or not) across the overflow check, but without any GPRs or stack available.

    So let's switch to an approach where we don't use the stack, and where the alignment check of the stack pointer occurs in the usual way, as this is guaranteed not to result in overflow. This means we will be able to do the overflow check first. While at it, switch to R1 so the mode stack pointer in R0 remains accessible.

    Acked-by: Nicolas Pitre <nico@fluxnic.net>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: switch_to: clean up Thumb2 code path (Ard Biesheuvel, 2021-12-03, 1 file, -5/+18)

    The load-multiple instruction that essentially performs the switch_to operation in ARM mode, by loading all callee save registers as well as the stack pointer and the program counter, is split into 3 separate loads for Thumb-2, with the IP register used as a temporary to capture the value of R4 before it gets overwritten.

    We can clean this up a bit, by sticking with a single LDMIA instruction, but one that pops SP and PC into IP and LR, respectively, and by using ordinary move register and branch instructions to get those values into SP and PC. This also allows us to move the set_current call closer to the assignment of SP, reducing the window where those are mutually out of sync.

    This is especially relevant for CONFIG_VMAP_STACK, which is being introduced in a subsequent patch, where we need to issue a load that might fault from the new stack while running from the old one, to ensure that stale PMD entries in the VMALLOC space are synced up.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Keith Packard <keithpac@amazon.com>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: unwind: disregard unwind info before stack frame is set up (Ard Biesheuvel, 2021-12-03, 1 file, -1/+15)

    When unwinding the stack from a stack overflow, we are likely to start from a stack push instruction, given that this is the most common way to grow the stack for compiler emitted code. This push instruction rarely appears anywhere else than at offset 0x0 of the function, and if it doesn't, the compiler tends to split up the unwind annotations, given that the stack frame layout is apparently not the same throughout the function.

    This means that, in the general case, if the frame's PC points at the first instruction covered by a certain unwind entry, there is no way the stack frame that the unwind entry describes could have been created yet, and so we are still on the stack frame of the caller in that case. So treat this as a special case, and return with the new PC taken from the frame's LR, without applying the unwind transformations to the virtual register set.

    This permits us to unwind the call stack on stack overflow when the overflow was caused by a stack push on function entry.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Tested-by: Keith Packard <keithpac@amazon.com>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

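    The special case plausibly reduces to an early-out of this shape (a sketch, not the literal patch; prel31_to_addr() and the index entry are the unwinder's existing machinery):

        /*
         * PC sits on the first instruction covered by this entry: the
         * frame it describes cannot exist yet, so the caller's state is
         * still live and the return address is simply LR.
         */
        if (frame->pc == prel31_to_addr(&idx->addr_offset)) {
                if (frame->pc == frame->lr)
                        return -URC_FAILURE;    /* don't loop forever */
                frame->pc = frame->lr;
                return URC_OK;
        }
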
| * | | ARM: run softirqs on the per-CPU IRQ stack (Ard Biesheuvel, 2021-12-03, 1 file, -0/+14)

    Now that we have enabled IRQ stacks, any softIRQs that are handled over the back of a hard IRQ will run from the IRQ stack as well. However, any synchronous softirq processing that happens when re-enabling softIRQs from task context will still execute on that task's stack.

    Since any call to local_bh_enable() at any level in the task's call stack may trigger a softIRQ processing run, which could potentially cause a task stack overflow if the combined stack footprints exceed the stack's size, let's run these synchronous invocations of do_softirq() on the IRQ stack as well.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Tested-by: Keith Packard <keithpac@amazon.com>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

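    The override is a thin wrapper; a sketch of its likely shape (call_with_stack() is ARM's existing helper for running a function on a different stack, and irq_stack_ptr names the per-CPU IRQ stack top):

        static void ____do_softirq(void *arg)
        {
                __do_softirq();
        }

        void do_softirq_own_stack(void)
        {
                /* run synchronous softirq processing off the task stack */
                call_with_stack(____do_softirq, NULL,
                                __this_cpu_read(irq_stack_ptr));
        }
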
| * | | ARM: implement IRQ stacks (Ard Biesheuvel, 2021-12-03, 3 files, -4/+88)

    Now that we no longer rely on the stack pointer to access the current task struct or thread info, we can implement support for IRQ stacks cleanly as well.

    Define a per-CPU IRQ stack and switch to this stack when taking an IRQ, provided that we were not already using that stack in the interrupted context. This is never the case for IRQs taken from user space, but ones taken while running in the kernel could fire while one taken from user space has not completed yet.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Tested-by: Keith Packard <keithpac@amazon.com>
    Acked-by: Nick Desaulniers <ndesaulniers@google.com>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: unwind: dump exception stack from calling frame (Ard Biesheuvel, 2021-12-03, 2 files, -2/+9)

    The existing code that dumps the contents of the pt_regs structure passed to __entry routines does so while unwinding the callee frame, and dereferences the stack pointer as a struct pt_regs*. This will no longer work when we enable support for IRQ or overflow stacks, because the struct pt_regs may live on the task stack, while we are executing from another stack.

    The unwinder has access to this information, but only while unwinding the calling frame. So let's combine the exception stack dumping code with the handling of the calling frame as well. By printing it before dumping the caller/callee addresses, the output order is preserved.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Tested-by: Keith Packard <keithpac@amazon.com>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: export dump_mem() to other objects (Ard Biesheuvel, 2021-12-03, 1 file, -4/+3)

    The unwind info based stack unwinder will make its own call to dump_mem() to dump the exception stack, so give it external linkage.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Tested-by: Keith Packard <keithpac@amazon.com>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: unwind: support unwinding across multiple stacks (Ard Biesheuvel, 2021-12-03, 1 file, -9/+16)

    Implement support in the unwinder for dealing with multiple stacks. This will be needed once we add support for IRQ stacks, or for the overflow stack used by the vmap'ed stacks code.

    This involves tracking the unwind opcodes that either update the virtual stack pointer from another virtual register, or perform an explicit subtract on the virtual stack pointer, and updating the low and high bounds that we use to sanitize the stack pointer accordingly.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Tested-by: Keith Packard <keithpac@amazon.com>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

| * | | ARM: remove some dead code (Ard Biesheuvel, 2021-12-03, 1 file, -5/+0)

    This code appears to be no longer used so let's get rid of it.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Tested-by: Keith Packard <keithpac@amazon.com>
    Tested-by: Marc Zyngier <maz@kernel.org>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M

* | | | ARM: fix Thumb2 regression with Spectre BHB (Russell King (Oracle), 2022-03-11, 1 file, -2/+2)

    When building for Thumb2, the vectors make use of a local label. Sadly, the Spectre BHB code also uses a local label with the same number which results in the Thumb2 reference pointing at the wrong place. Fix this by changing the number used for the Spectre BHB local label.

    Fixes: b9baf5c8c5c3 ("ARM: Spectre-BHB workaround")
    Tested-by: Nathan Chancellor <nathan@kernel.org>
    Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* | | | ARM: fix build error when BPF_SYSCALL is disabled (Emmanuel Gil Peyrot, 2022-03-08, 1 file, -1/+1)

    It was missing a semicolon.

    Signed-off-by: Emmanuel Gil Peyrot <linkmauve@linkmauve.fr>
    Reviewed-by: Nathan Chancellor <nathan@kernel.org>
    Fixes: 25875aa71dfe ("ARM: include unprivileged BPF status in Spectre V2 reporting")
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* | | | Merge tag 'for-linus-bhb' of git://git.armlinux.org.uk/~rmk/linux-arm (Linus Torvalds, 2022-03-08, 5 files, -12/+229)

    Pull ARM spectre fixes from Russell King:
     "ARM Spectre BHB mitigations. These patches add Spectre BHB mitigations for the following Arm CPUs to the 32-bit ARM kernels:

       - Cortex A15
       - Cortex A57
       - Cortex A72
       - Cortex A73
       - Cortex A75
       - Brahma B15

      for CVE-2022-23960"

    * tag 'for-linus-bhb' of git://git.armlinux.org.uk/~rmk/linux-arm:
      ARM: include unprivileged BPF status in Spectre V2 reporting
      ARM: Spectre-BHB workaround
      ARM: use LOADADDR() to get load address of sections
      ARM: early traps initialisation
      ARM: report Spectre v2 status through sysfs

| * | | | ARM: include unprivileged BPF status in Spectre V2 reporting (Russell King (Oracle), 2022-03-08, 1 file, -0/+13)

    The mitigations for Spectre-BHB are only applied when an exception is taken, but when unprivileged BPF is enabled, userspace can load BPF programs that can be used to exploit the problem.

    When unprivileged BPF is enabled, report the vulnerable status via the spectre_v2 sysfs file.

    Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

| * | | | ARM: Spectre-BHB workaround (Russell King (Oracle), 2022-03-05, 4 files, -6/+139)

    Workaround the Spectre BHB issues for Cortex-A15, Cortex-A57, Cortex-A72, Cortex-A73 and Cortex-A75. We include Brahma B15 as well to be safe, which is affected by Spectre V2 in the same ways as Cortex-A15.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

| * | | | ARM: early traps initialisation (Russell King (Oracle), 2022-03-05, 1 file, -6/+21)

    Provide a couple of helpers to copy the vectors and stubs, and also to flush the copied vectors and stubs.

    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

| * | | | ARM: report Spectre v2 status through sysfs (Russell King (Oracle), 2022-03-05, 2 files, -0/+56)

    As per other architectures, add support for reporting the Spectre vulnerability status via the sysfs CPU entries.

    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

* | | | | Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm (Linus Torvalds, 2022-03-02, 1 file, -8/+28)

    Pull ARM fixes from Russell King:

     - Fix kgdb breakpoint for Thumb2
     - Fix dependency for BITREVERSE kconfig
     - Fix nommu early_params and __setup returns

    * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
      ARM: 9182/1: mmu: fix returns from early_param() and __setup() functions
      ARM: 9178/1: fix unmet dependency on BITREVERSE for HAVE_ARCH_BITREVERSE
      ARM: Fix kgdb breakpoint for Thumb2

| * | | ARM: Fix kgdb breakpoint for Thumb2 (Russell King (Oracle), 2022-02-21, 1 file, -8/+28)

    The kgdb code needs to register an undef hook for the Thumb UDF instruction that will fault in order to be functional on Thumb2 platforms.

    Reported-by: Johannes Stezenbach <js@sig21.net>
    Tested-by: Johannes Stezenbach <js@sig21.net>
    Fixes: 5cbad0ebf45c ("kgdb: support for ARCH=arm")
    Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

* | | proc: remove PDE_DATA() completely (Muchun Song, 2022-01-22, 1 file, -1/+1)

    Remove PDE_DATA() completely and replace it with pde_data().

    [akpm@linux-foundation.org: fix naming clash in drivers/nubus/proc.c]
    [akpm@linux-foundation.org: now fix it properly]

    Link: https://lkml.kernel.org/r/20211124081956.87711-2-songmuchun@bytedance.com
    Signed-off-by: Muchun Song <songmuchun@bytedance.com>
    Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
    Cc: Alexey Dobriyan <adobriyan@gmail.com>
    Cc: Alexey Gladkov <gladkov.alexey@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

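    For callers, the rename is mechanical; e.g. in a proc open handler (a sketch; foo_proc_open and foo_show are placeholder names):

        static int foo_proc_open(struct inode *inode, struct file *file)
        {
                /* was: single_open(file, foo_show, PDE_DATA(inode)); */
                return single_open(file, foo_show, pde_data(inode));
        }
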
* | | Merge branch 'signal-for-v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace (Linus Torvalds, 2022-01-17, 1 file, -1/+1)

    Pull signal/exit/ptrace updates from Eric Biederman:
     "This set of changes deletes some dead code, makes a lot of cleanups which hopefully make the code easier to follow, and fixes bugs found along the way.

      The end-game, which I have not yet reached, is for fatal signals that generate coredumps to be short-circuit deliverable from complete_signal, for force_siginfo_to_task not to require changing userspace configured signal delivery state, and for the ptrace stops to always happen in locations where we can guarantee on all architectures that all of the registers are saved and available on the stack.

      Removal of profile_task_exit, profile_munmap, and profile_handoff_task are the big successes for dead code removal this round.

      A bunch of small bug fixes are included, as most of the issues reported were small enough that they would not affect bisection, so I simply added the fixes and did not fold the fixes into the changes they were fixing.

      There was a bug that broke coredumps piped to systemd-coredump. I dropped the change that caused that bug and replaced it entirely with something much more restrained. Unfortunately that required some rebasing.

      Some successes after this set of changes: There are few enough calls to do_exit to audit in a reasonable amount of time. The lifetime of struct kthread now matches the lifetime of struct task, and the pointer to struct kthread is no longer stored in set_child_tid. The flag SIGNAL_GROUP_COREDUMP is removed. The field group_exit_task is removed. Issues where task->exit_code was examined when signal->group_exit_code should have been examined were fixed.

      There are several loosely related changes included because I am cleaning up and if I don't include them they will probably get lost.

      The original postings of these changes can be found at:
        https://lkml.kernel.org/r/87a6ha4zsd.fsf@email.froward.int.ebiederm.org
        https://lkml.kernel.org/r/87bl1kunjj.fsf@email.froward.int.ebiederm.org
        https://lkml.kernel.org/r/87r19opkx1.fsf_-_@email.froward.int.ebiederm.org

      I trimmed back the last set of changes to only the obviously correct ones, simply because there was less time for review than I had hoped"

    * 'signal-for-v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace: (44 commits)
      ptrace/m68k: Stop open coding ptrace_report_syscall
      ptrace: Remove unused regs argument from ptrace_report_syscall
      ptrace: Remove second setting of PT_SEIZED in ptrace_attach
      taskstats: Cleanup the use of task->exit_code
      exit: Use the correct exit_code in /proc/<pid>/stat
      exit: Fix the exit_code for wait_task_zombie
      exit: Coredumps reach do_group_exit
      exit: Remove profile_handoff_task
      exit: Remove profile_task_exit & profile_munmap
      signal: clean up kernel-doc comments
      signal: Remove the helper signal_group_exit
      signal: Rename group_exit_task group_exec_task
      coredump: Stop setting signal->group_exit_task
      signal: Remove SIGNAL_GROUP_COREDUMP
      signal: During coredumps set SIGNAL_GROUP_EXIT in zap_process
      signal: Make coredump handling explicit in complete_signal
      signal: Have prepare_signal detect coredumps using signal->core_state
      signal: Have the oom killer detect coredumps using signal->core_state
      exit: Move force_uaccess back into do_exit
      exit: Guarantee make_task_dead leaks the tsk when calling do_task_exit
      ...

| * | | exit: Add and use make_task_dead. (Eric W. Biederman, 2021-12-13, 1 file, -1/+1)

    There are two big uses of do_exit. The first is its designed use, to be the guts of the exit(2) system call. The second use is to terminate a task after something catastrophic has happened, like a NULL pointer dereference in kernel code.

    Add a function make_task_dead that is initially exactly the same as do_exit to cover the cases where do_exit is called to handle catastrophic failure. In time this can probably be reduced to just a light wrapper around do_task_dead. For now keep it exactly the same so that there will be no behavioral differences introducing this new concept.

    Replace all of the uses of do_exit that use it for catastrophic task cleanup with make_task_dead to make it clear what the code is doing.

    As part of this, rename rewind_stack_do_exit to rewind_stack_and_make_dead.

    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>

* | | Merge tag 'perf_core_for_v5.17_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2022-01-12, 1 file, -24/+4)

    Pull perf updates from Borislav Petkov:
     "Cleanup of the perf/kvm interaction."

    * tag 'perf_core_for_v5.17_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      perf: Drop guest callback (un)register stubs
      KVM: arm64: Drop perf.c and fold its tiny bits of code into arm.c
      KVM: arm64: Hide kvm_arm_pmu_available behind CONFIG_HW_PERF_EVENTS=y
      KVM: arm64: Convert to the generic perf callbacks
      KVM: x86: Move Intel Processor Trace interrupt handler to vmx.c
      KVM: Move x86's perf guest info callbacks to generic KVM
      KVM: x86: More precisely identify NMI from guest when handling PMI
      KVM: x86: Drop current_vcpu for kvm_running_vcpu + kvm_arch_vcpu variable
      perf/core: Use static_call to optimize perf_guest_info_callbacks
      perf: Force architectures to opt-in to guest callbacks
      perf: Add wrappers for invoking guest callbacks
      perf/core: Rework guest callbacks to prepare for static_call support
      perf: Drop dead and useless guest "support" from arm, csky, nds32 and riscv
      perf: Stop pretending that perf can handle multiple guest callbacks
      KVM: x86: Register Processor Trace interrupt hook iff PT enabled in guest
      KVM: x86: Register perf callbacks after calling vendor's hardware_setup()
      perf: Protect perf_guest_cbs with RCU

| * | | perf: Drop dead and useless guest "support" from arm, csky, nds32 and riscv (Sean Christopherson, 2021-11-17, 1 file, -29/+4)

    Drop "support" for guest callbacks from architectures that don't implement the guest callbacks. Future patches will convert the callbacks to static_call; rather than churn a bunch of arch code (that was presumably copy+pasted from x86), remove it wholesale as it's useless and at best wasting cycles.

    A future patch will also add a Kconfig to force architectures to opt into the callbacks, to make it more difficult for useless "support" to sneak in in the future.

    No functional change intended.

    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
    Link: https://lore.kernel.org/r/20211111020738.2512932-6-seanjc@google.com
