path: root/arch
* arm64: cpufeature: Fix handling of CTR_EL0.IDC field (Suzuki K Poulose, 2018-10-16; 4 files, -5/+87)

  CTR_EL0.IDC reports the data cache clean requirements for instruction to data coherence. However, if the field is 0, we need to check the CLIDR_EL1 fields to detect the status of the feature. Currently we don't do this, and we generate a warning and taint the kernel when there is a mismatch in the field among the CPUs. Userspace also has no reliable way of checking CLIDR_EL1 for the status of the feature.

  This patch fixes the problem by checking the CLIDR_EL1 fields when CTR_EL0.IDC == 0, and updating the kernel's copy of CTR_EL0 for the CPU with the actual status of the feature. This allows the sanity check infrastructure to do the proper checking of the fields, and also allows the CTR_EL0 emulation code to supply the real status of the feature.

  Now, if a CPU has raw CTR_EL0.IDC == 0 and effective IDC == 1 (with overall system-wide IDC == 1), we need to expose the real value to the user. So we trap CTR_EL0 accesses on any CPU which reports an incorrect CTR_EL0.IDC.

  Fixes: commit 6ae4b6e057888 ("arm64: Add support for new control bits CTR_EL0.DIC and CTR_EL0.IDC")
  Cc: Shanker Donthineni <shankerd@codeaurora.org>
  Cc: Philip Elcan <pelcan@codeaurora.org>
  Cc: Will Deacon <will.deacon@arm.com>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
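  A minimal sketch of the CLIDR_EL1 check described above, in kernel-style C. The CLIDR_EL1 field positions (LoUU, LoC, LoUIS) follow the Arm ARM; CTR_IDC_SHIFT (bit 28) and the derivation logic are assumptions based on this commit's description, not a verbatim copy of the patch:

	/* CLIDR_EL1 cache-level fields (Arm ARM layout) */
	#define CLIDR_LOUU_SHIFT	27
	#define CLIDR_LOC_SHIFT		24
	#define CLIDR_LOUIS_SHIFT	21
	#define CLIDR_LOUU(clidr)	(((clidr) >> CLIDR_LOUU_SHIFT) & 0x7)
	#define CLIDR_LOC(clidr)	(((clidr) >> CLIDR_LOC_SHIFT) & 0x7)
	#define CLIDR_LOUIS(clidr)	(((clidr) >> CLIDR_LOUIS_SHIFT) & 0x7)

	#define CTR_IDC_SHIFT		28	/* CTR_EL0.IDC */

	/*
	 * If no cache level needs cleaning for I/D coherence, IDC is
	 * effectively 1 even when the raw register reports 0.
	 */
	static u64 effective_ctr_el0(u64 ctr, u64 clidr)
	{
		if (!(ctr & (1UL << CTR_IDC_SHIFT)) &&
		    (CLIDR_LOC(clidr) == 0 ||
		     (CLIDR_LOUIS(clidr) == 0 && CLIDR_LOUU(clidr) == 0)))
			ctr |= 1UL << CTR_IDC_SHIFT;
		return ctr;
	}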
* arm64: cpufeature: ctr: Fix cpu capability check for late CPUs (Suzuki K Poulose, 2018-10-16; 1 file, -4/+18)

  The matches() routine for a capability must honor the "scope" passed to it and return the proper results, i.e., when passed SCOPE_LOCAL_CPU, it should check the status of the capability on the current CPU. This is used by verify_local_cpu_capabilities() on a late secondary CPU to make sure that it is compliant with the established system features. However, ARM64_HAS_CACHE_{IDC/DIC} always checks the system-wide registers, which could mean that a late secondary CPU returns "true" (since the CPU hasn't updated the system-wide registers yet) and thus leaves the system in an inconsistent state, where the system assumes it has the IDC/DIC feature while the new CPU doesn't.

  Fixes: commit 6ae4b6e0578886eb36 ("arm64: Add support for new control bits CTR_EL0.DIC and CTR_EL0.IDC")
  Cc: Philip Elcan <pelcan@codeaurora.org>
  Cc: Shanker Donthineni <shankerd@codeaurora.org>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
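  A hedged sketch of a scope-aware matches() routine for ARM64_HAS_CACHE_IDC; the helper names (arm64_ftr_reg_ctrel0, read_cpuid_effective_cachetype()) are assumptions based on the description above:

	static bool has_cache_idc(const struct arm64_cpu_capabilities *entry,
				  int scope)
	{
		u64 ctr;

		if (scope == SCOPE_SYSTEM)
			ctr = arm64_ftr_reg_ctrel0.sys_val;	/* established system value */
		else
			ctr = read_cpuid_effective_cachetype();	/* this CPU only */

		return ctr & (1UL << CTR_IDC_SHIFT);
	}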
* arm64: mm: Use __pa_symbol() for set_swapper_pgd() (James Morse, 2018-10-10; 1 file, -1/+1)

  commit 2330b7ca78350efcb ("arm64/mm: use fixmap to modify swapper_pg_dir") modifies swapper_pg_dir via the fixmap, as the kernel page tables have been moved to a read-only part of the kernel mapping. Using __pa() to set up the fixmap causes CONFIG_DEBUG_VIRTUAL to fire, as this function is used on the kernel-image swapper address. The in_swapper_pgdir() test before each call of this function means set_swapper_pgd() will only ever be called when pgdp points somewhere in the kernel-image mapping of swapper_pg_dir. Use __pa_symbol().

  Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Cc: Jun Yao <yaojun8558363@gmail.com>
  Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Signed-off-by: James Morse <james.morse@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
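  Roughly how set_swapper_pgd() looks with the fix; the fixmap helpers and swapper_pgdir_lock are as introduced by the commit referenced above, so treat this as a sketch rather than the exact patch:

	void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
	{
		pgd_t *fixmap_pgdp;

		spin_lock(&swapper_pgdir_lock);
		/*
		 * __pa_symbol() is the right helper for kernel-image
		 * addresses; plain __pa() here trips CONFIG_DEBUG_VIRTUAL.
		 */
		fixmap_pgdp = pgd_set_fixmap(__pa_symbol(pgdp));
		WRITE_ONCE(*fixmap_pgdp, pgd);
		/* Tear down the transient mapping again. */
		pgd_clear_fixmap();
		spin_unlock(&swapper_pgdir_lock);
	}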
* Revert "arm64: uaccess: implement unsafe accessors"James Morse2018-10-101-46/+15
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit a1f33941f7e103bcf471eaf8461b212223c642d6. The unsafe accessors allow the PAN enable/disable calls to be made once for a group of accesses. Adding these means we can now have sequences that look like this: | user_access_begin(); | unsafe_put_user(static-value, x, err); | unsafe_put_user(helper-that-sleeps(), x, err); | user_access_end(); Calling schedule() without taking an exception doesn't switch the PSTATE or TTBRs. We can switch out of a uaccess-enabled region, and run other code with uaccess enabled for a different thread. We can also switch from uaccess-disabled code back into this region, meaning the unsafe_put_user()s will fault. For software-PAN, threads that do this will get stuck as handle_mm_fault() will determine the page has already been mapped in, but we fault again as the page tables aren't loaded. To solve this we need code in __switch_to() that save/restores the PAN state. Acked-by: Julien Thierry <julien.thierry@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: mm: Drop the unused cpu parameter (Shaokun Zhang, 2018-10-09; 1 file, -4/+4)

  The cpu parameter is never used in flush_context(); remove it.

  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: mm: Use #ifdef for the __PAGETABLE_P?D_FOLDED defines (James Morse, 2018-10-05; 1 file, -2/+6)

  __is_defined(__PAGETABLE_P?D_FOLDED) doesn't quite work as intended, as these symbols are internal to asm-generic and aren't defined in the way kconfig expects. This makes them always evaluate to false. Switch to #ifdef.

  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: James Morse <james.morse@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
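  The difference, sketched: __is_defined() only evaluates to 1 for macros defined as a literal 1 (the kconfig convention), while asm-generic defines __PAGETABLE_PMD_FOLDED with no value.

	#if __is_defined(__PAGETABLE_PMD_FOLDED)	/* before: always false */
	/* ... */
	#endif

	#ifdef __PAGETABLE_PMD_FOLDED			/* after: works as intended */
	/* ... */
	#endif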
* arm64: Fix typo in a comment in arch/arm64/mm/kasan_init.c (Kyrylo Tkachov, 2018-10-05; 1 file, -1/+1)

  "bellow" -> "below"

  The recommendation from kegel.com/kerspell is to only fix the howlers. "Bellow" is a synonym of "howl", so this should be appropriate.

  Signed-off-by: Kyrylo Tkachov <kyrylo.tkachov@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: xen: Use existing helper to check interrupt status (Julien Thierry, 2018-10-03; 1 file, -1/+1)

  The status of interrupts might depend on more than just pstate. Use interrupts_disabled() instead of raw_irqs_disabled_flags() to take the full context into account.

  Acked-by: Stefano Stabellini <sstabellini@kernel.org>
  Signed-off-by: Julien Thierry <julien.thierry@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Use daifflag_restore after bp_hardening (Julien Thierry, 2018-10-03; 1 file, -2/+3)

  For EL0 entries requiring bp_hardening, daif status is kept at DAIF_PROCCTX_NOIRQ until after hardening has been done. Then interrupts are enabled through local_irq_enable(). Before using local_irq_* functions, daifflags should be properly restored to a state where IRQs are enabled.

  Enable IRQs by restoring the DAIF_PROCCTX state after bp hardening.

  Acked-by: James Morse <james.morse@arm.com>
  Signed-off-by: Julien Thierry <julien.thierry@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: daifflags: Use irqflags functions for daifflags (Julien Thierry, 2018-10-03; 1 file, -10/+5)

  Some of the work done in daifflags save/restore is already provided by irqflags functions. Daifflags should always be a superset of irqflags (it handles irq status plus the status of other flags). Modifying the behaviour of irqflags should alter the behaviour of daifflags.

  Use the irqflags save/restore functions for the corresponding daifflags operations.

  Reviewed-by: James Morse <james.morse@arm.com>
  Signed-off-by: Julien Thierry <julien.thierry@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
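  A sketch of the resulting shape, assuming the usual arm64 helper names; the point is that the DAIF save path reuses arch_local_save_flags() instead of open-coding a DAIF read:

	static inline unsigned long local_daif_save(void)
	{
		unsigned long flags;

		flags = arch_local_save_flags();	/* irqflags helper: reads DAIF */
		local_daif_mask();			/* mask D, A, I and F */

		return flags;
	}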
* arm64: arch_timer: avoid unused function warning (Arnd Bergmann, 2018-10-03; 1 file, -0/+1)

  arm64_1188873_read_cntvct_el0() is protected by the correct CONFIG_ARM64_ERRATUM_1188873 #ifdef, but the only reference to it is also inside a CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND section, which causes a warning if that option is disabled:

	drivers/clocksource/arm_arch_timer.c:323:20: error: 'arm64_1188873_read_cntvct_el0' defined but not used [-Werror=unused-function]

  Since the erratum requires that we always apply the workaround in the timer driver, select that symbol as we do for SoC-specific errata.

  Fixes: 95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873")
  Acked-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Trap WFI executed in userspace (Marc Zyngier, 2018-10-01; 4 files, -2/+19)

  It recently came to light that userspace can execute WFI, and that the arm64 kernel doesn't trap this event. This sounds rather benign, but the kernel should decide when it wants to wait for an interrupt, and not userspace.

  Let's trap WFI and immediately return after having skipped the instruction. This effectively makes WFI a rather expensive NOP.

  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
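  A hedged sketch of the handler: the name wfi_handler is illustrative, but arm64_skip_faulting_instruction() is the existing helper for stepping past a trapped instruction:

	static void wfi_handler(unsigned int esr, struct pt_regs *regs)
	{
		/* Skip the trapped WFI: an expensive NOP from EL0's view. */
		arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
	}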
* arm64/kprobes: remove an extra semicolon in arch_prepare_kprobe (zhong jiang, 2018-10-01; 1 file, -1/+1)

  There is an extra semicolon in arch_prepare_kprobe; remove it.

  Signed-off-by: zhong jiang <zhongjiang@huawei.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/numa: Unify common error path in numa_init() (Anshuman Khandual, 2018-10-01; 1 file, -4/+7)

  At present numa_free_distance() is called before numa_distance is even initialized with numa_alloc_distance(), which is pointless. Instead, let's call numa_free_distance() on the common error path inside numa_init(), after numa_alloc_distance() has succeeded.

  Fixes: 1a2db30034 ("arm64, numa: Add NUMA support for arm64 platforms")
  Acked-by: Punit Agrawal <punit.agrawal@arm.com>
  Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
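  The resulting structure, roughly (the error label is illustrative):

	static int __init numa_init(int (*init_func)(void))
	{
		int ret;

		ret = numa_alloc_distance();
		if (ret < 0)
			return ret;		/* nothing allocated yet */

		ret = init_func();
		if (ret < 0)
			goto out_free_distance;	/* common error path */

		/* ... */
		return 0;

	out_free_distance:
		numa_free_distance();
		return ret;
	}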
* arm64/numa: Report correct memblock range for the dummy node (Anshuman Khandual, 2018-10-01; 1 file, -1/+1)

  The dummy node ID is marked into all memory ranges on the system, so the dummy node really extends over the entire memblock.memory. Hence report the correct extent information for the dummy node using the memblock range helper functions, instead of the range [0LLU, PFN_PHYS(max_pfn) - 1].

  Fixes: 1a2db30034 ("arm64, numa: Add NUMA support for arm64 platforms")
  Acked-by: Punit Agrawal <punit.agrawal@arm.com>
  Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
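  The change amounts to swapping the assumed range for the memblock helpers; a sketch of the call in the dummy-node setup, hedged as to the exact call site:

	/* before: assumes DRAM starts at physical address 0 */
	ret = numa_add_memblk(0, 0, PFN_PHYS(max_pfn));

	/* after: report the real extent of memblock.memory */
	ret = numa_add_memblk(0, memblock_start_of_DRAM(),
			      memblock_end_of_DRAM());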
* arm64/mm: Define esr_to_debug_fault_info() (Anshuman Khandual, 2018-10-01; 1 file, -1/+7)

  fault_info[] and debug_fault_info[] are static arrays defining memory abort exception handling functions, selected by the ESR fault status code encodings. As esr_to_fault_info() is already available to provide the fault_info[] array lookup, it makes sense to have a corresponding debug_fault_info[] array lookup function as well. This adds an equivalent helper function, esr_to_debug_fault_info().

  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
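  The helper is a one-liner mirroring esr_to_fault_info(); DBG_ESR_EVT() is the existing macro that extracts the debug event type from the ESR:

	static inline const struct fault_info *esr_to_debug_fault_info(unsigned int esr)
	{
		return debug_fault_info + DBG_ESR_EVT(esr);
	}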
* arm64/mm: Reorganize arguments for is_el1_permission_fault() (Anshuman Khandual, 2018-10-01; 1 file, -5/+4)

  Most memory abort exception handling related functions take their arguments in the order (addr, esr, regs), except is_el1_permission_fault(). This changes the argument order in that function to (addr, esr, regs), matching the others.

  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/mm: Use ESR_ELx_FSC macro while decoding fault exception (Anshuman Khandual, 2018-10-01; 1 file, -1/+1)

  Just replace the hard-coded value of 63 (0b111111) with the existing macro ESR_ELx_FSC when parsing the status code during a fault exception.

  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
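  The substitution in esr_to_fault_info(), where ESR_ELx_FSC is 0x3f:

	/* before: magic number */
	return fault_info + (esr & 63);

	/* after: the named mask documents the intent */
	return fault_info + (esr & ESR_ELx_FSC);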
* arm64: arch_timer: Add workaround for ARM erratum 1188873 (Marc Zyngier, 2018-10-01; 4 files, -1/+24)

  When running on Cortex-A76, a timer access from an AArch32 EL0 task may end up with a corrupted value or register. The workaround for this is to trap these accesses at EL1/EL2 and execute them there. This only affects versions r0p0, r1p0 and r2p0 of the CPU.

  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: compat: Add CNTFRQ trap handler (Marc Zyngier, 2018-10-01; 2 files, -0/+16)

  Just like CNTVCT, we need to handle userspace trapping into the kernel if we've decided that the timer wasn't fit for purpose... 64-bit userspace is already dealt with, but we're missing the equivalent compat handling.

  Reviewed-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: compat: Add CNTVCT trap handler (Marc Zyngier, 2018-10-01; 2 files, -0/+19)

  Since people seem to make a point of breaking the userspace-visible counter, we have no choice but to trap the access. We already do this for 64-bit userspace, but it is lacking for compat. Let's provide the required handler.

  Reviewed-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: compat: Add cp15_32 and cp15_64 handler arrays (Marc Zyngier, 2018-10-01; 1 file, -0/+28)

  We're now ready to start handling CP15 access. Let's add (empty) arrays for both 32-bit and 64-bit accessors, and the code that deals with them.

  Reviewed-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: compat: Add condition code checks and IT advance (Marc Zyngier, 2018-10-01; 1 file, -0/+85)

  Here's a /really nice/ part of the architecture: a CP15 access is allowed to trap even if it fails its condition check, and SW must handle it. This includes decoding the IT state if this happens in an IT block. As a consequence, SW must also deal with advancing the IT state machine.

  Reviewed-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: compat: Add separate CP15 trapping hook (Marc Zyngier, 2018-10-01; 2 files, -2/+26)

  Instead of directly generating an UNDEF when trapping a CP15 access, let's add a new entry point to that effect (which only generates an UNDEF for now).

  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Add decoding macros for CP15_32 and CP15_64 traps (Marc Zyngier, 2018-10-01; 1 file, -0/+52)

  So far, we don't have anything to help decoding ESR_ELx when dealing with ESR_ELx_EC_CP15_{32,64}. As we're about to handle some of those, let's add some useful macros.

  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
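  A sketch of the decode macros for a trapped MCR/MRC (CP15_32), following the ESR_ELx ISS layout in the Arm ARM; the macro names are illustrative:

	#define ESR_ELx_CP15_32_ISS_DIR(esr)	((esr) & 0x1)		/* 1 = read (MRC) */
	#define ESR_ELx_CP15_32_ISS_CRM(esr)	(((esr) >> 1) & 0xf)
	#define ESR_ELx_CP15_32_ISS_RT(esr)	(((esr) >> 5) & 0x1f)
	#define ESR_ELx_CP15_32_ISS_CRN(esr)	(((esr) >> 10) & 0xf)
	#define ESR_ELx_CP15_32_ISS_OP1(esr)	(((esr) >> 14) & 0x7)
	#define ESR_ELx_CP15_32_ISS_OP2(esr)	(((esr) >> 17) & 0x7)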
* arm64: remove unused asm/compiler.h header file (Ard Biesheuvel, 2018-10-01; 4 files, -33/+0)

  arm64 does not define CONFIG_HAVE_ARCH_COMPILER_H, nor does it keep anything useful in its copy of asm/compiler.h, so let's remove it before anybody starts using it.

  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: compat: Provide definition for COMPAT_SIGMINSTKSZ (Will Deacon, 2018-10-01; 1 file, -0/+1)

  arch/arm/ defines a SIGMINSTKSZ of 2k, so we should use the same value for compat tasks.

  Cc: Arnd Bergmann <arnd@arndb.de>
  Cc: Dominik Brodowski <linux@dominikbrodowski.net>
  Cc: "Eric W. Biederman" <ebiederm@xmission.com>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Oleg Nesterov <oleg@redhat.com>
  Reviewed-by: Dave Martin <Dave.Martin@arm.com>
  Reported-by: Steve McIntyre <steve.mcintyre@arm.com>
  Tested-by: Steve McIntyre <93sam@debian.org>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
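  The fix itself is the single definition, matching arch/arm/'s 2k value:

	#define COMPAT_SIGMINSTKSZ	2048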
* arm64/mm: move runtime pgds to rodata (Jun Yao, 2018-09-25; 2 files, -21/+24)

  Now that deliberate writes to swapper_pg_dir are made via the fixmap, we can defend against errant writes by moving it into the rodata section. Since tramp_pg_dir and reserved_ttbr0 must be at a fixed offset from swapper_pg_dir, and are not modified at runtime, these are also moved into the rodata section. Likewise, idmap_pg_dir is not modified at runtime, and is moved into rodata.

  Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
  Reviewed-by: James Morse <james.morse@arm.com>
  [Mark: simplify linker script, commit message]
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/mm: use fixmap to modify swapper_pg_dir (Jun Yao, 2018-09-25; 2 files, -8/+53)

  Once swapper_pg_dir is in the rodata section, it will not be possible to modify it directly, but we will need to modify it in some cases. To enable this, we can use the fixmap when deliberately modifying swapper_pg_dir. As the pgd is only transiently mapped, this provides some resilience against illicit modification of the pgd, e.g. for Kernel Space Mirror Attack (KSMA).

  Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
  Reviewed-by: James Morse <james.morse@arm.com>
  [Mark: simplify ifdeffery, commit message]
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/mm: Separate boot-time page tables from swapper_pg_dir (Jun Yao, 2018-09-25; 6 files, -33/+21)

  Since the address of swapper_pg_dir is fixed for a given kernel image, it is an attractive target for manipulation via an arbitrary write. To mitigate this we'd like to make it read-only by moving it into the rodata section. We require that swapper_pg_dir is at a fixed offset from tramp_pg_dir and reserved_ttbr0, so these will also need to move into rodata. However, swapper_pg_dir is allocated along with some transient page tables used for boot which we do not want to move into rodata.

  As a step towards this, this patch separates the boot-time page tables into a new init_pg_dir, and reduces swapper_pg_dir to the single page it needs to be. This allows us to retain the relationship between swapper_pg_dir, tramp_pg_dir, and reserved_ttbr0, while cleanly separating these from the boot-time page tables.

  The init_pg_dir holds all of the pgd/pud/pmd/pte levels needed during boot, and all of these levels will be freed when we switch to the swapper_pg_dir, which is initialized by the existing code in paging_init(). Since we start off on the init_pg_dir, we no longer need to allocate a transient page table in paging_init() in order to ensure that swapper_pg_dir isn't live while we initialize it.

  There should be no functional change as a result of this patch.

  Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
  Reviewed-by: James Morse <james.morse@arm.com>
  [Mark: place init_pg_dir after BSS, fold mm changes, commit message]
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/mm: Pass ttbr1 as a parameter to __enable_mmu() (Jun Yao, 2018-09-25; 2 files, -9/+12)

  In subsequent patches we'll use a transient pgd during the primary cpu's boot process. To make this work while allowing secondary cpus to use the swapper_pg_dir, we need to pass the relevant TTBR1 pgd as a parameter to __enable_mmu().

  This patch updates __enable_mmu() to take this as a parameter, updating callsites to pass swapper_pg_dir for now.

  There should be no functional change as a result of this patch.

  Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
  Reviewed-by: James Morse <james.morse@arm.com>
  [Mark: simplify assembly, clarify commit message]
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: lse: remove -fcall-used-x0 flag (Tri Vo, 2018-09-24; 1 file, -1/+1)

  x0 is not callee-saved in the PCS, so there is no need to specify -fcall-used-x0. Clang doesn't currently support -fcall-used flags. This patch will help building the kernel with clang.

  Tested-by: Nick Desaulniers <ndesaulniers@google.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Tri Vo <trong@android.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Remove unused VGA console support (Andrew Murray, 2018-09-21; 1 file, -4/+0)

  Support for VGA_CONSOLE is not allowable due to commit ee23794b8668 ("video: vgacon: Don't build on arm64"), thus remove the associated unused code. Whilst PCI on arm64 would support VGA, a valid screen_info structure is missing.

  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Andrew Murray <andrew.murray@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Kconfig: Remove ARCH_HAS_HOLES_MEMORYMODEL (James Morse, 2018-09-21; 3 files, -8/+1)

  include/linux/mmzone.h describes ARCH_HAS_HOLES_MEMORYMODEL as relevant when parts of the memmap have been free()d. This would happen on systems where memory is smaller than a sparsemem section, and the extra struct pages are expensive. pfn_valid() on these systems returns true for the whole sparsemem section, so an extra memmap_valid_within() check is needed.

  On arm64 we have nomap memory, so we always provide pfn_valid() to test for nomap pages. This means ARCH_HAS_HOLES_MEMORYMODEL's extra checks are already rolled up into pfn_valid(). Remove it.

  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: James Morse <james.morse@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/cpufeatures: Emulate MRS instructions by parsing ESR_ELx.ISS (Anshuman Khandual, 2018-09-21; 2 files, -0/+29)

  The Armv8.4-A extension encodes MRS instruction details inside ESR_ELx.ISS for exception class ESR_ELx_EC_SYS64 (0x18). This encoding can be used to emulate MRS instructions, which avoids fetching/decoding the instruction from user space and thus improves performance.

  This adds a new sys64_hook structure element with the applicable ESR mask/value pair for MRS instructions on various system registers, constrained to the sysreg encodings which are currently allowed to be emulated.

  Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/cpufeatures: Factorize emulate_mrs() (Anshuman Khandual, 2018-09-21; 2 files, -10/+16)

  MRS emulation gets triggered with exception class 0x00 or 0x18, eventually calling the function emulate_mrs(), which fetches the user space instruction and analyses its encodings (OP0, OP1, OP2, CRN, CRM, RT). The kernel tries to emulate the given instruction looking into the encoding details. Going forward, these encodings can also be parsed from the ESR_ELx.ISS fields without needing to fetch/decode the faulting userspace instruction, which can improve performance.

  This factorizes the emulate_mrs() function in a way that it can be called directly with the MRS encodings (OP0, OP1, OP2, CRN, CRM) for any given target register, which can then be used directly from the 0x18 exception class.

  Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/cpufeatures: Introduce ESR_ELx_SYS64_ISS_RT() (Anshuman Khandual, 2018-09-21; 3 files, -5/+7)

  Extracting the target register from the ESR.ISS encoding has already been required in multiple places. Just make it a macro definition and replace all the existing use cases.

  Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
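  The macro, built from the existing Rt mask/shift (Rt lives in ISS bits [9:5]):

	#define ESR_ELx_SYS64_ISS_RT(esr) \
		(((esr) & ESR_ELx_SYS64_ISS_RT_MASK) >> ESR_ELx_SYS64_ISS_RT_SHIFT)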
* arm64: cpu_errata: Remove ARM64_MISMATCHED_CACHE_LINE_SIZE (Will Deacon, 2018-09-19; 3 files, -20/+7)

  There's no need to treat mismatched cache-line sizes reported by CTR_EL0 differently to any other mismatched fields that we treat as "STRICT" in the cpufeature code. In both cases we need to trap and emulate EL0 accesses to the register, so drop ARM64_MISMATCHED_CACHE_LINE_SIZE and rely on ARM64_MISMATCHED_CACHE_TYPE instead.

  Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  [catalin.marinas@arm.com: move ARM64_HAS_CNP in the empty cpucaps.h slot]
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: KVM: Enable Common Not Private translations (Vladimir Murzin, 2018-09-18; 5 files, -0/+15)

  We rely on the cpufeature framework to detect and enable CNP, so for KVM we need to patch hyp to set the CNP bit just before TTBR0_EL2 gets written. For the guest we encode the CNP bit while building the vttbr, so we don't need to bother with that in a world switch.

  Reviewed-by: James Morse <james.morse@arm.com>
  Acked-by: Catalin Marinas <catalin.marinas@arm.com>
  Acked-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: mm: Support Common Not Private translations (Vladimir Murzin, 2018-09-18; 9 files, -6/+88)

  Common Not Private (CNP) is a feature of the ARMv8.2 extension which allows translation table entries to be shared between different PEs in the same inner shareable domain, so the hardware can use this fact to optimise the caching of such entries in the TLB.

  CNP occupies one bit in TTBRx_ELy and VTTBR_EL2, which advertises to the hardware that the translation table entries pointed to by this TTBR are the same as those of every PE in the same inner shareable domain for which the equivalent TTBR also has the CNP bit set. If the CNP bit is set but the TTBR does not point at the same translation table entries for a given ASID and VMID, then the system is mis-configured and the results of translations are UNPREDICTABLE.

  For the kernel we postpone setting CNP until all cpus are up, and rely on the cpufeature framework to 1) patch the code which is sensitive to CNP and 2) update TTBR1_EL1 with the CNP bit set. TTBR1_EL1 can be reprogrammed as a result of hibernation or cpuidle (via __enable_mmu). For these two cases we restore the CnP bit via __cpu_suspend_exit().

  There are a few cases where we need to take care of changes in TTBR0_EL1:
  - a switch to the idmap
  - software-emulated PAN

  We rule out the latter via Kconfig options, and for the former we make sure that CNP is set for non-zero ASIDs only. A sketch of the TTBR tagging follows this entry.

  Reviewed-by: James Morse <james.morse@arm.com>
  Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
  [catalin.marinas@arm.com: default y for CONFIG_ARM64_CNP]
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
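  CNP occupies bit 0 of the TTBR, so tagging a TTBR value is a single OR; a minimal sketch (the helper name is illustrative):

	#define TTBR_CNP_BIT	(1UL << 0)

	static inline u64 ttbr_with_cnp(u64 ttbr, bool cnp)
	{
		/*
		 * Only set CNP where the tables really are shared, e.g.
		 * non-zero ASIDs; a mis-tagged TTBR makes translations
		 * UNPREDICTABLE.
		 */
		return cnp ? (ttbr | TTBR_CNP_BIT) : ttbr;
	}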
* arm64: sysreg: Clean up instructions for modifying PSTATE fields (Suzuki K Poulose, 2018-09-17; 2 files, -13/+23)

  Instructions for modifying the PSTATE fields which were not supported in the older toolchains (e.g. PAN, UAO) are generated using macros. We have so far used the normal sys_reg() helper for defining the PSTATE fields. While this works fine, it is really difficult to correlate the code with the Arm ARM definition.

  As per the Arm ARM, the PSTATE fields are defined only using Op1 and Op2 fields, with fixed values for Op0 and CRn. Also, the CRm field has been reserved for the immediate value of the instruction. So using sys_reg() looks quite confusing.

  This patch cleans up the instruction helpers by bringing them in line with the Arm ARM definitions, to make it easier to correlate code with the document. No functional changes.

  Cc: Will Deacon <will.deacon@arm.com>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: fix for bad_mode() handler to always result in panic (Hari Vyas, 2018-09-14; 1 file, -1/+0)

  The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic(). Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case.

  Signed-off-by: Hari Vyas <hari.vyas@broadcom.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: force_signal_inject: WARN if called from kernel context (Will Deacon, 2018-09-14; 1 file, -1/+4)

  force_signal_inject() is designed to send a fatal signal to userspace, so WARN if the current pt_regs indicates a kernel context. This can currently happen for the undefined instruction trap, so patch that up so we always BUG() if we didn't have a handler.

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: cpu: Move errata and feature enable callbacks closer to callers (Will Deacon, 2018-09-14; 5 files, -29/+28)

  The cpu errata and feature enable callbacks are only called via their respective arm64_cpu_capabilities structure and therefore shouldn't exist in the global namespace. Move the PAN, RAS and cache maintenance emulation enable callbacks into the same files as their corresponding arm64_cpu_capabilities structures, making them static in the process.

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe (Will Deacon, 2018-09-14; 2 files, -0/+22)

  When running without VHE, it is necessary to set SCTLR_EL2.DSSBS if SSBD has been forcefully disabled on the kernel command-line.

  Acked-by: Christoffer Dall <christoffer.dall@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3 (Will Deacon, 2018-09-14; 8 files, -2/+106)

  On CPUs with support for PSTATE.SSBS, the kernel can toggle the SSBD state without needing to call into firmware. This patch hooks into the existing SSBD infrastructure so that SSBS is used on CPUs that support it, but it's all made horribly complicated by the very real possibility of big/little systems that don't uniformly provide the new capability.

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: entry: Allow handling of undefined instructions from EL1 (Will Deacon, 2018-09-14; 2 files, -5/+8)

  Rather than panic() when taking an undefined instruction exception from EL1, allow a hook to be registered in case we want to emulate the instruction, like we will for the SSBS PSTATE manipulation instructions.

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: ssbd: Drop #ifdefs for PR_SPEC_STORE_BYPASS (Will Deacon, 2018-09-14; 1 file, -3/+0)

  Now that we're all merged nicely into mainline, there's no need to check to see if PR_SPEC_STORE_BYPASS is defined.

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: cpufeature: Detect SSBS and advertise to userspace (Will Deacon, 2018-09-14; 5 files, -7/+33)

  Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass Safe (SSBS), which can be used as a mitigation against Spectre variant 4. Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS directly, so that userspace can toggle the SSBS control without trapping to the kernel.

  This patch probes for the existence of SSBS and advertises the new instructions to userspace if they exist.

  Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Fix silly typo in comment (Will Deacon, 2018-09-14; 1 file, -1/+1)

  I was passing through and figured I'd fix this up: featuer -> feature

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>