path: root/arch/arm64
* Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2018-12-25, 107 files, -1099/+3043)

    Pull arm64 festive updates from Will Deacon:
     "In the end, we ended up with quite a lot more than I expected:

      - Support for ARMv8.3 Pointer Authentication in userspace (CRIU and
        kernel-side support to come later)

      - Support for per-thread stack canaries, pending an update to GCC
        that is currently undergoing review

      - Support for kexec_file_load(), which permits secure boot of a
        kexec payload but also happens to improve the performance of
        kexec dramatically because we can avoid the sucky purgatory code
        from userspace. Kdump will come later (requires updates to
        libfdt).

      - Optimisation of our dynamic CPU feature framework, so that all
        detected features are enabled via a single stop_machine()
        invocation

      - KPTI whitelisting of Cortex-A CPUs unaffected by Meltdown, so
        that they can benefit from global TLB entries when KASLR is not
        in use

      - 52-bit virtual addressing for userspace (kernel remains 48-bit)

      - Patch in LSE atomics for per-cpu atomic operations

      - Custom preempt.h implementation to avoid unconditional calls to
        preempt_schedule() from preempt_enable()

      - Support for the new 'SB' Speculation Barrier instruction

      - Vectorised implementation of XOR checksumming and CRC32
        optimisations

      - Workaround for Cortex-A76 erratum #1165522

      - Improved compatibility with Clang/LLD

      - Support for TX2 system PMUs for profiling the L3 cache and DMC

      - Reflect read-only permissions in the linear map by default

      - Ensure MMIO reads are ordered with subsequent calls to Xdelay()

      - Initial support for memory hotplug

      - Tweak the threshold when we invalidate the TLB by-ASID, so that
        mremap() performance is improved for ranges spanning multiple
        PMDs.

      - Minor refactoring and cleanups"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (125 commits)
      arm64: kaslr: print PHYS_OFFSET in dump_kernel_offset()
      arm64: sysreg: Use _BITUL() when defining register bits
      arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches
      arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4
      arm64: docs: document pointer authentication
      arm64: ptr auth: Move per-thread keys from thread_info to thread_struct
      arm64: enable pointer authentication
      arm64: add prctl control for resetting ptrauth keys
      arm64: perf: strip PAC when unwinding userspace
      arm64: expose user PAC bit positions via ptrace
      arm64: add basic pointer authentication support
      arm64/cpufeature: detect pointer authentication
      arm64: Don't trap host pointer auth use to EL2
      arm64/kvm: hide ptrauth from guests
      arm64/kvm: consistently handle host HCR_EL2 flags
      arm64: add pointer authentication register bits
      arm64: add comments about EC exception levels
      arm64: perf: Treat EXCLUDE_EL* bit definitions as unsigned
      arm64: kpti: Whitelist Cortex-A CPUs that don't implement the CSV3 field
      arm64: enable per-task stack canaries
      ...
| * arm64: kaslr: print PHYS_OFFSET in dump_kernel_offset() (Miles Chen, 2018-12-14, 1 file, -0/+1)

    When debugging with kaslr, it is sometimes necessary to have
    PHYS_OFFSET in order to perform linear virtual address to physical
    address translation. Sometimes we're debugging with only limited
    information, such as a kernel log and a symbol file, so print
    PHYS_OFFSET in dump_kernel_offset() for that case.

    Tested by:
      echo c > /proc/sysrq-trigger
      [ 11.996161] SMP: stopping secondary CPUs
      [ 11.996732] Kernel Offset: 0x2522200000 from 0xffffff8008000000
      [ 11.996881] PHYS_OFFSET: 0xffffffeb40000000

    Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Miles Chen <miles.chen@mediatek.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: sysreg: Use _BITUL() when defining register bits (Will Deacon, 2018-12-13, 1 file, -40/+41)

    Using shifts directly is error-prone and can cause inadvertent sign
    extensions or build problems with older versions of binutils.
    Consistent use of the _BITUL() macro makes these problems disappear.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
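    For illustration, a minimal sketch of the hazard (the SCTLR bit name
    is real, but the _RAW variant is made up here for contrast; _BITUL()
    comes from include/uapi/linux/const.h):

        #include <linux/const.h>

        #define SCTLR_ELx_ENIA_RAW  (1 << 31)    /* int: sign-extends when
                                                    widened to 64 bits */
        #define SCTLR_ELx_ENIA      _BITUL(31)   /* unsigned long: safe  */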
| * arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches (Will Deacon, 2018-12-13, 3 files, -89/+99)

    Open-coding the pointer-auth HWCAPs is a mess and can be avoided by
    reusing the multi-cap logic from the CPU errata framework.

    Move the multi_entry_cap_matches code to cpufeature.h and reuse it
    for the pointer auth HWCAPs.

    Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4 (Will Deacon, 2018-12-13, 3 files, -17/+8)

    We can easily avoid defining the two meta-capabilities for the
    address and generic keys, so remove them and instead just check both
    the architected and impdef capabilities when determining the level of
    system support.

    Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: ptr auth: Move per-thread keys from thread_info to thread_struct (Will Deacon, 2018-12-13, 4 files, -8/+7)

    We don't need to get at the per-thread keys from assembly at all, so
    they can live alongside the rest of the per-thread register state in
    thread_struct instead of thread_info.

    This will also allow straightforward whitelisting of the keys for
    hardened usercopy should we expose them via a ptrace request later
    on.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: enable pointer authentication (Mark Rutland, 2018-12-13, 1 file, -0/+23)

    Now that all the necessary bits are in place for userspace, add the
    necessary Kconfig logic to allow this to be enabled.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: add prctl control for resetting ptrauth keys (Kristina Martsenko, 2018-12-13, 4 files, -0/+55)

    Add an arm64-specific prctl to allow a thread to reinitialize its
    pointer authentication keys to random values. This can be useful when
    exec() is not used for starting new processes, to ensure that
    different processes still have different keys.

    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
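    A minimal usage sketch, assuming the uapi constants added by this
    series (PR_PAC_RESET_KEYS in <linux/prctl.h>; an arg2 of zero selects
    all keys):

        #include <stdio.h>
        #include <sys/prctl.h>
        #include <linux/prctl.h>

        /* Re-randomise every ptrauth key for the calling thread. */
        static void reset_all_pac_keys(void)
        {
                if (prctl(PR_PAC_RESET_KEYS, 0, 0, 0, 0))
                        perror("PR_PAC_RESET_KEYS");
        }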
| * arm64: perf: strip PAC when unwinding userspace (Mark Rutland, 2018-12-13, 2 files, -1/+12)

    When the kernel is unwinding userspace callchains, we can't expect
    that the userspace consumer of these callchains has the data
    necessary to strip the PAC from the stored LR.

    This patch has the kernel strip the PAC from user stackframes when
    the in-kernel unwinder is used. This only affects the LR value, and
    not the FP.

    This only affects the in-kernel unwinder. When userspace performs
    unwinding, it is up to userspace to strip PACs as necessary (which
    can be determined from DWARF information).

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: expose user PAC bit positions via ptrace (Mark Rutland, 2018-12-13, 5 files, -2/+56)

    When pointer authentication is in use, data/instruction pointers have
    a number of PAC bits inserted into them. The number and position of
    these bits depends on the configured TCR_ELx.TxSZ and whether tagging
    is enabled. ARMv8.3 allows tagging to differ for instruction and data
    pointers.

    For userspace debuggers to unwind the stack and/or to follow pointer
    chains, they need to be able to remove the PAC bits before attempting
    to use a pointer.

    This patch adds a new structure with masks describing the location of
    the PAC bits in userspace instruction and data pointers (i.e. those
    addressable via TTBR0), which userspace can query via
    PTRACE_GETREGSET. By clearing these bits from pointers (and replacing
    them with the value of bit 55), userspace can acquire the PAC-less
    versions.

    This new regset is exposed when the kernel is built with (user)
    pointer authentication support, and the address authentication
    feature is enabled. Otherwise, the regset is hidden.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    [will: Fix to use vabits_user instead of VA_BITS and rename macro]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
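    A sketch of how a debugger might consume the new regset; the struct
    and note-type names follow this series' uapi, while strip_pac() and
    strip_data_pac() are hypothetical helpers:

        #include <stdint.h>
        #include <sys/types.h>
        #include <sys/ptrace.h>
        #include <sys/uio.h>
        #include <linux/elf.h>      /* NT_ARM_PAC_MASK */
        #include <asm/ptrace.h>     /* struct user_pac_mask */

        /* Replace the PAC bits with copies of bit 55, as described above. */
        static uint64_t strip_pac(uint64_t ptr, uint64_t mask)
        {
                return (ptr & (1ULL << 55)) ? (ptr | mask) : (ptr & ~mask);
        }

        static uint64_t strip_data_pac(pid_t pid, uint64_t ptr)
        {
                struct user_pac_mask masks;
                struct iovec iov = { &masks, sizeof(masks) };

                if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_PAC_MASK, &iov))
                        return ptr;     /* regset hidden: no ptrauth */
                return strip_pac(ptr, masks.data_mask);
        }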
| * arm64: add basic pointer authentication support (Mark Rutland, 2018-12-13, 6 files, -0/+104)

    This patch adds basic support for pointer authentication, allowing
    userspace to make use of APIAKey, APIBKey, APDAKey, APDBKey, and
    APGAKey. The kernel maintains key values for each process (shared by
    all threads within), which are initialised to random values at exec()
    time.

    The ID_AA64ISAR1_EL1.{APA,API,GPA,GPI} fields are exposed to
    userspace, to describe that pointer authentication instructions are
    available and that the kernel is managing the keys. Two new hwcaps
    are added for the same reason: PACA (for address authentication) and
    PACG (for generic authentication).

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Tested-by: Adam Wallis <awallis@codeaurora.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    [will: Fix sizeof() usage and unroll address key initialisation]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
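    A sketch of the userspace-visible check (HWCAP_PACA and HWCAP_PACG
    are the arm64 uapi names introduced by this patch):

        #include <sys/auxv.h>
        #include <asm/hwcap.h>    /* HWCAP_PACA, HWCAP_PACG */

        static int have_addr_auth(void)
        {
                return !!(getauxval(AT_HWCAP) & HWCAP_PACA);
        }

        static int have_generic_auth(void)
        {
                return !!(getauxval(AT_HWCAP) & HWCAP_PACG);
        }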
| * arm64/cpufeature: detect pointer authentication (Mark Rutland, 2018-12-13, 3 files, -1/+109)

    So that we can dynamically handle the presence of pointer
    authentication functionality, wire up probing code in cpufeature.c.

    From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and
    now has four fields describing the presence of pointer authentication
    functionality:

    * APA - address authentication present, using an architected
      algorithm
    * API - address authentication present, using an IMP DEF algorithm
    * GPA - generic authentication present, using an architected
      algorithm
    * GPI - generic authentication present, using an IMP DEF algorithm

    This patch checks for both address and generic authentication,
    separately. It is assumed that if all CPUs support an IMP DEF
    algorithm, the same algorithm is used across all CPUs.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Don't trap host pointer auth use to EL2 (Mark Rutland, 2018-12-13, 1 file, -1/+3)

    To allow EL0 (and/or EL1) to use pointer authentication
    functionality, we must ensure that pointer authentication
    instructions and accesses to pointer authentication keys are not
    trapped to EL2.

    This patch ensures that HCR_EL2 is configured appropriately when the
    kernel is booted at EL2. For non-VHE kernels we set
    HCR_EL2.{API,APK}, ensuring that EL1 can access keys and permit EL0
    use of instructions. For VHE kernels host EL0 (TGE && E2H) is
    unaffected by these settings, and it doesn't matter how we configure
    HCR_EL2.{API,APK}, so we don't bother setting them.

    This does not enable support for KVM guests, since KVM manages
    HCR_EL2 itself when running VMs.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Acked-by: Christoffer Dall <christoffer.dall@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: kvmarm@lists.cs.columbia.edu
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64/kvm: hide ptrauth from guests (Mark Rutland, 2018-12-13, 2 files, -0/+26)

    In subsequent patches we're going to expose ptrauth to the host
    kernel and userspace, but things are a bit trickier for guest
    kernels. For the time being, let's hide ptrauth from KVM guests.

    Regardless of how well-behaved the guest kernel is, guest userspace
    could attempt to use ptrauth instructions, triggering a trap to EL2,
    resulting in noise from kvm_handle_unknown_ec(). So let's write up a
    handler for the PAC trap, which silently injects an UNDEF into the
    guest, as if the feature were really missing.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: kvmarm@lists.cs.columbia.edu
    Signed-off-by: Will Deacon <will.deacon@arm.com>
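    The shape of such a handler, sketched from the description above
    (kvm_inject_undefined() is the existing KVM helper; the handler name
    is illustrative):

        static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
        {
                /* ptrauth is hidden, so pretend the feature is missing. */
                kvm_inject_undefined(vcpu);
                return 1;
        }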
| * arm64/kvm: consistently handle host HCR_EL2 flags (Mark Rutland, 2018-12-13, 3 files, -4/+4)

    In KVM we define the configuration of HCR_EL2 for a VHE HOST in
    HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the
    non-VHE host flags, and open-code HCR_RW. Further, in head.S we
    open-code the flags for VHE and non-VHE configurations.

    In future, we're going to want to configure more flags for the host,
    so let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use
    both HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the kvm code and
    head.S.

    We now use mov_q to generate the HCR_EL2 value, as we do when
    configuring other registers in head.S.

    Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: kvmarm@lists.cs.columbia.edu
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: add pointer authentication register bits (Mark Rutland, 2018-12-13, 2 files, -1/+32)

    The ARMv8.3 pointer authentication extension adds:

    * New fields in ID_AA64ISAR1 to report the presence of pointer
      authentication functionality.
    * New control bits in SCTLR_ELx to enable this functionality.
    * New system registers to hold the keys necessary for this
      functionality.
    * A new ESR_ELx.EC code used when the new instructions are affected
      by configurable traps.

    This patch adds the relevant definitions to <asm/sysreg.h> and
    <asm/esr.h> for these, to be used by subsequent patches.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: add comments about EC exception levels (Kristina Martsenko, 2018-12-13, 1 file, -7/+7)

    To make it clear which exceptions can't be taken to EL1 or EL2, add
    comments next to the ESR_ELx_EC_* macro definitions.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: perf: Treat EXCLUDE_EL* bit definitions as unsigned (Will Deacon, 2018-12-13, 1 file, -3/+3)

    Although the upper 32 bits of the PMEVTYPER<n>_EL0 registers are
    RES0, we should treat the EXCLUDE_EL* bit definitions as unsigned so
    that we avoid accidentally sign-extending the privilege filtering bit
    (bit 31) into the upper half of the register.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
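    A small sketch of the failure mode (the macro names here are
    illustrative):

        #include <stdint.h>

        #define EXCLUDE_EL1_SIGNED  (1  << 31)   /* int: negative value  */
        #define EXCLUDE_EL1         (1U << 31)   /* unsigned: as intended */

        uint64_t bad  = EXCLUDE_EL1_SIGNED;  /* 0xffffffff80000000: sets
                                                the RES0 upper half */
        uint64_t good = EXCLUDE_EL1;         /* 0x0000000080000000 */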
| * arm64: kpti: Whitelist Cortex-A CPUs that don't implement the CSV3 field (Will Deacon, 2018-12-13, 1 file, -0/+6)

    While the CSV3 field of the ID_AA64PFR0 CPU ID register can be
    checked to see if a CPU is susceptible to Meltdown and therefore
    requires kpti to be enabled, existing CPUs do not implement this
    field.

    We therefore whitelist all unaffected Cortex-A CPUs that do not
    implement the CSV3 field.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * Merge branch 'for-next/perf' into aarch64/for-next/core (Will Deacon, 2018-12-12, 2 files, -176/+209)

    Merge in arm64 perf and PMU driver updates, including support for
    the system/uncore PMU in the ThunderX2 platform.
| | * arm64: perf: set suppress_bind_attrs flag to true (Anders Roxell, 2018-11-21, 1 file, -0/+1)

    The armv8_pmuv3 driver doesn't have a remove function, and when the
    test 'CONFIG_DEBUG_TEST_DRIVER_REMOVE=y' is enabled, the following
    call trace can be seen.

    [ 1.424287] Failed to register pmu: armv8_pmuv3, reason -17
    [ 1.424870] WARNING: CPU: 0 PID: 1 at ../kernel/events/core.c:11771 perf_event_sysfs_init+0x98/0xdc
    [ 1.425220] Modules linked in:
    [ 1.425531] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.19.0-rc7-next-20181012-00003-ge7a97b1ad77b-dirty #35
    [ 1.425951] Hardware name: linux,dummy-virt (DT)
    [ 1.426212] pstate: 80000005 (Nzcv daif -PAN -UAO)
    [ 1.426458] pc : perf_event_sysfs_init+0x98/0xdc
    [ 1.426720] lr : perf_event_sysfs_init+0x98/0xdc
    [ 1.426908] sp : ffff00000804bd50
    [ 1.427077] x29: ffff00000804bd50 x28: ffff00000934e078
    [ 1.427429] x27: ffff000009546000 x26: 0000000000000007
    [ 1.427757] x25: ffff000009280710 x24: 00000000ffffffef
    [ 1.428086] x23: ffff000009408000 x22: 0000000000000000
    [ 1.428415] x21: ffff000009136008 x20: ffff000009408730
    [ 1.428744] x19: ffff80007b20b400 x18: 000000000000000a
    [ 1.429075] x17: 0000000000000000 x16: 0000000000000000
    [ 1.429418] x15: 0000000000000400 x14: 2e79726f74636572
    [ 1.429748] x13: 696420656d617320 x12: 656874206e692065
    [ 1.430060] x11: 6d616e20656d6173 x10: 2065687420687469
    [ 1.430335] x9 : ffff00000804bd50 x8 : 206e6f7361657220
    [ 1.430610] x7 : 2c3376756d705f38 x6 : ffff00000954d7ce
    [ 1.430880] x5 : 0000000000000000 x4 : 0000000000000000
    [ 1.431226] x3 : 0000000000000000 x2 : ffffffffffffffff
    [ 1.431554] x1 : 4d151327adc50b00 x0 : 0000000000000000
    [ 1.431868] Call trace:
    [ 1.432102]  perf_event_sysfs_init+0x98/0xdc
    [ 1.432382]  do_one_initcall+0x6c/0x1a8
    [ 1.432637]  kernel_init_freeable+0x1bc/0x280
    [ 1.432905]  kernel_init+0x18/0x160
    [ 1.433115]  ret_from_fork+0x10/0x18
    [ 1.433297] ---[ end trace 27fd415390eb9883 ]---

    Rework to set the suppress_bind_attrs flag to avoid removing the
    device when CONFIG_DEBUG_TEST_DRIVER_REMOVE=y, since there's no real
    reason to remove the armv8_pmuv3 driver.

    Cc: Arnd Bergmann <arnd@arndb.de>
    Co-developed-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
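    The fix amounts to one line in the driver definition; a simplified
    sketch (the surrounding structure follows the arm64 PMU driver, but
    is trimmed here):

        static struct platform_driver armv8_pmu_driver = {
                .driver = {
                        .name = "armv8-pmu",
                        /* No .remove callback, so forbid manual unbind. */
                        .suppress_bind_attrs = true,
                },
                .probe = armv8_pmu_device_probe,
        };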
| | * arm64: perf: Fix typos in comment (Shaokun Zhang, 2018-11-21, 1 file, -1/+1)

    Fix up one typo: Onl -> Only

    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * arm64: perf: Hook up new events (Will Deacon, 2018-11-21, 1 file, -0/+24)

    There have been some additional events added to the PMU architecture
    since Armv8.0, so expose them via our sysfs infrastructure.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * arm64: perf: Move event definitions into perf_event.h (Will Deacon, 2018-11-21, 2 files, -154/+155)

    The PMU event numbers are split between perf_event.h and
    perf_event.c, which makes it difficult to spot any gaps in the
    numbers which may be allocated in the future.

    This patch sorts the events numerically, adds some missing events and
    moves the definitions into perf_event.h.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * arm64: perf: Remove duplicate generic cache events (Will Deacon, 2018-11-21, 1 file, -4/+0)

    We cannot distinguish reads from writes in our generic cache events,
    so drop the WRITE entries and leave the READ entries pointing to the
    combined read/write events, as is done by other CPUs and
    architectures.

    Reported-by: Ganapatrao Kulkarni <Ganapatrao.Kulkarni@cavium.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * arm64: perf: Add support for Armv8.1 PMCEID register format (Will Deacon, 2018-11-21, 1 file, -7/+18)

    Armv8.1 allocated the upper 32-bits of the PMCEID registers to
    describe the common architectural and microarchitecture events
    beginning at 0x4000.

    Add support for these registers to our probing code, so that we can
    advertise the SPE events when they are supported by the CPU.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * arm64: perf: Terminate PMU assignment statements with semicolons (Will Deacon, 2018-11-21, 1 file, -10/+10)

    As a hangover from when this code used a designated initialiser,
    we've been using commas to terminate the arm_pmu field assignments.
    Whilst harmless, it's also weird, so replace them with semicolons
    instead.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: enable per-task stack canaries (Ard Biesheuvel, 2018-12-12, 5 files, -2/+23)

    This enables the use of per-task stack canary values if GCC has
    support for emitting the stack canary reference relative to the value
    of sp_el0, which holds the task struct pointer in the arm64 kernel.

    The $(eval) extends KBUILD_CFLAGS at the moment the make rule is
    applied, which means asm-offsets.o (which we rely on for the offset
    value) is built without the arguments, and everything built
    afterwards has the options set.

    Reviewed-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: Add memory hotplug support (Robin Murphy, 2018-12-12, 4 files, -0/+38)

    Wire up the basic support for hot-adding memory. Since memory hotplug
    is fairly tightly coupled to sparsemem, we tweak pfn_valid() to also
    cross-check the presence of a section in the manner of the generic
    implementation, before falling back to memblock to check for no-map
    regions within a present section as before. By having
    arch_add_memory() create the linear mapping first, this then makes
    everything work in the way that __add_section() expects.

    We expect hotplug to be ACPI-driven, so the swapper_pg_dir updates
    should be safe from races by virtue of the global device hotplug
    lock.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: percpu: Fix LSE implementation of value-returning pcpu atomics (Will Deacon, 2018-12-12, 1 file, -1/+1)

    Commit 959bf2fd03b5 ("arm64: percpu: Rewrite per-cpu ops to allow use
    of LSE atomics") introduced alternative code sequences for the arm64
    percpu atomics, so that the LSE instructions can be patched in at
    runtime if they are supported by the CPU.

    Unfortunately, when patching in the LSE sequence for a value-returning
    pcpu atomic, the argument registers are the wrong way round. The
    implementation of this_cpu_add_return() therefore ends up adding
    uninitialised stack to the percpu variable and returning garbage.

    As it turns out, there aren't very many users of the value-returning
    percpu atomics in mainline and we only spotted this due to a failure
    in the kprobes selftests. In this case, when attempting to single-step
    over the out-of-line instruction slot, the debug monitors would not be
    enabled because calling this_cpu_inc_return() on the kernel debug
    monitor refcount would fail to detect the transition from 0. We would
    consequently execute past the slot and take an undefined instruction
    exception from the kernel, resulting in a BUG:

    | kernel BUG at arch/arm64/kernel/traps.c:421!
    | PREEMPT SMP
    | pc : do_undefinstr+0x268/0x278
    | lr : do_undefinstr+0x124/0x278
    | Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
    | Call trace:
    |  do_undefinstr+0x268/0x278
    |  el1_undef+0x10/0x78
    |  0xffff00000803c004
    |  init_kprobes+0x150/0x180
    |  do_one_initcall+0x74/0x178
    |  kernel_init_freeable+0x188/0x224
    |  kernel_init+0x10/0x100
    |  ret_from_fork+0x10/0x1c

    Fix the argument order to get the value-returning pcpu atomics
    working correctly when implemented using the LSE instructions.

    Reported-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
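    The kprobes failure came from a refcounting pattern along these
    lines (a simplified sketch; the per-cpu variable name follows
    debug-monitors.c, the function is illustrative):

        static DEFINE_PER_CPU(int, mde_ref_count);

        static void monitors_ref_inc(void)
        {
                /* With the LSE operands reversed, the 0 -> 1 transition
                 * was never observed, so the monitors stayed disabled. */
                if (this_cpu_inc_return(mde_ref_count) == 1)
                        pr_info("first user: enabling debug monitors\n");
        }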
| * | arm64: add <asm/asm-prototypes.h> (Mark Rutland, 2018-12-12, 1 file, -0/+26)

    While we can export symbols from assembly files, CONFIG_MODVERSIONS
    requires C declarations of anything that's exported.

    Let's account for this as other architectures do by placing these
    declarations in <asm/asm-prototypes.h>, which kbuild will
    automatically use to generate modversion information for assembly
    files. Since we already define most prototypes in existing headers,
    we simply need to include those headers in <asm/asm-prototypes.h>,
    and don't need to duplicate these.

    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: mm: Introduce MAX_USER_VA_BITS definition (Will Deacon, 2018-12-12, 3 files, -10/+8)

    With the introduction of 52-bit virtual addressing for userspace, we
    are now in a position where the virtual addressing capability of
    userspace may exceed that of the kernel. Consequently, the VA_BITS
    definition cannot be used blindly, since it reflects only the size of
    kernel virtual addresses.

    This patch introduces MAX_USER_VA_BITS which is either VA_BITS or 52
    depending on whether 52-bit virtual addressing has been configured at
    build time, removing a few places where the 52 is open-coded based on
    explicit CONFIG_ guards.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
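    The definition is essentially this (a sketch of the new header
    logic):

        #ifdef CONFIG_ARM64_USER_VA_BITS_52
        #define MAX_USER_VA_BITS        52
        #else
        #define MAX_USER_VA_BITS        VA_BITS
        #endif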
| * | arm64: fix ARM64_USER_VA_BITS_52 builds (Arnd Bergmann, 2018-12-11, 1 file, -1/+1)

    In some randconfig builds, the new CONFIG_ARM64_USER_VA_BITS_52
    triggered a build failure:

    arch/arm64/mm/proc.S:287: Error: immediate out of range

    As it turns out, we were incorrectly setting PGTABLE_LEVELS here,
    lacking any other default value. This fixes the calculation of
    CONFIG_PGTABLE_LEVELS to consider all combinations again.

    Fixes: 68d23da4373a ("arm64: Kconfig: Re-jig CONFIG options for 52-bit VA")
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: preempt: Fix big-endian when checking preempt count in assembly (Will Deacon, 2018-12-11, 2 files, -9/+5)

    Commit 396244692232 ("arm64: preempt: Provide our own implementation
    of asm/preempt.h") extended the preempt count field in struct
    thread_info to 64 bits, so that it consists of a 32-bit count plus a
    32-bit flag indicating whether or not the current task needs
    rescheduling.

    Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to
    point to this new field, the assembly usage was left untouched
    meaning that a 32-bit load from TSK_TI_PREEMPT on a big-endian
    machine actually returns the reschedule flag instead of the count.

    Whilst we could fix this by pointing TSK_TI_PREEMPT at the count
    field, we're actually better off reworking the two assembly users so
    that they operate on the whole 64-bit value in favour of inspecting
    the thread flags separately in order to determine whether a
    reschedule is needed.

    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Reported-by: "kernelci.org bot" <bot@kernelci.org>
    Tested-by: Kevin Hilman <khilman@baylibre.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
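    A sketch of the layout in question (little-endian shown); on
    big-endian the 32-bit halves swap, which is why a 32-bit load of the
    count actually read the reschedule flag:

        union {
                u64     preempt_count;  /* what the reworked asm uses */
                struct {
                        u32     count;          /* low half on LE  */
                        u32     need_resched;   /* high half on LE */
                } preempt;
        };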
| * | arm64: kexec_file: include linux/vmalloc.h (Arnd Bergmann, 2018-12-11, 1 file, -0/+1)

    This is needed for compilation in some configurations that don't
    include it implicitly:

    arch/arm64/kernel/machine_kexec_file.c: In function 'arch_kimage_file_post_load_cleanup':
    arch/arm64/kernel/machine_kexec_file.c:37:2: error: implicit declaration of function 'vfree'; did you mean 'kvfree'? [-Werror=implicit-function-declaration]

    Fixes: 52b2a8af7436 ("arm64: kexec_file: load initrd and device-tree")
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: mm: EXPORT vabits_user to modules (Will Deacon, 2018-12-10, 1 file, -0/+1)

    TASK_SIZE is defined using the vabits_user variable for 64-bit tasks,
    so ensure that this variable is exported to modules to avoid the
    following build breakage with allmodconfig:

    | ERROR: "vabits_user" [lib/test_user_copy.ko] undefined!
    | ERROR: "vabits_user" [drivers/misc/lkdtm/lkdtm.ko] undefined!
    | ERROR: "vabits_user" [drivers/infiniband/hw/mlx5/mlx5_ib.ko] undefined!

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | Merge branch 'for-next/kexec' into aarch64/for-next/core (Will Deacon, 2018-12-10, 12 files, -17/+545)

    Merge in kexec_file_load() support from Akashi Takahiro.
| | * | arm64: kexec_file: forbid kdump via kexec_file_load() (James Morse, 2018-12-07, 1 file, -0/+4)

    Now that kexec_walk_memblock() can do the crash-kernel placement
    itself, architectures that don't support kdump via kexec_file_load()
    need to explicitly forbid it.

    We don't support this on arm64 until the kernel can add the
    elfcorehdr and usable-memory-range fields to the DT. Without these
    the crash-kernel overwrites the previous kernel's memory during
    startup.

    Add a check to refuse crash image loading.

    Reviewed-by: Bhupesh Sharma <bhsharma@redhat.com>
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: kexec_file: Refactor setup_dtb() to consolidate error checking (Will Deacon, 2018-12-06, 1 file, -30/+34)

    setup_dtb() is a little difficult to read. This is largely because it
    duplicates the FDT -> Linux errno conversion for every intermediate
    return value, but also because of silly cosmetic things like naming
    and formatting.

    Given that this is all brand new, refactor the function to get us off
    on the right foot.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: kexec_file: add kaslr support (AKASHI Takahiro, 2018-12-06, 1 file, -1/+18)

    Adding "kaslr-seed" to the dtb enables triggering kaslr, or kernel
    virtual address randomization, at secondary kernel boot. We always do
    this as it does no harm on a kaslr-incapable kernel.

    We don't have any "switch" to turn off this feature directly, but it
    can still be suppressed by passing "nokaslr" as a kernel boot
    argument.

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    [will: Use rng_is_initialized()]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
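    A sketch of the property write with libfdt (the function name here is
    illustrative; the actual write lives in the loader's dtb setup code):

        #include <libfdt.h>

        static int set_kaslr_seed(void *dtb, u64 seed)
        {
                int node = fdt_path_offset(dtb, "/chosen");

                if (node < 0)
                        return node;
                return fdt_setprop_u64(dtb, node, "kaslr-seed", seed);
        }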
| | * | arm64: kexec_file: add kernel signature verification support (AKASHI Takahiro, 2018-12-06, 2 files, -5/+42)

    With this patch, kernel verification can be done without the IMA
    security subsystem enabled. Turn on CONFIG_KEXEC_VERIFY_SIG instead.

    On x86, a signature is embedded into a PE file (Microsoft's format)
    header of the binary. Since arm64's "Image" can also be seen as a PE
    file as long as CONFIG_EFI is enabled, we adopt this format for
    kernel signing.

    You can create a signed kernel image with:

        $ sbsign --key ${KEY} --cert ${CERT} Image

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Reviewed-by: James Morse <james.morse@arm.com>
    [will: removed useless pr_debug()]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: kexec_file: invoke the kernel without purgatory (AKASHI Takahiro, 2018-12-06, 3 files, -7/+16)

    On arm64, purgatory would do almost nothing, so just invoke the
    secondary kernel directly by jumping into its entry code.

    In this case, cpu_soft_restart() must be called with the dtb address
    in its fifth argument; the behaviour stays compatible with the
    kexec_load case as long as that argument is null.

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Reviewed-by: James Morse <james.morse@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: kexec_file: allow for loading Image-format kernel (AKASHI Takahiro, 2018-12-06, 4 files, -1/+117)

    This patch provides kexec_file_ops for the "Image"-format kernel. In
    this implementation, a binary is always loaded at a fixed offset
    identified by the text_offset field of its header.

    Regarding signature verification for trusted boot, this patch doesn't
    contain CONFIG_KEXEC_VERIFY_SIG support, which is to be added later
    in this series, but file-attribute-based verification is still a
    viable option by enabling the IMA security subsystem.

    You can sign (label) a to-be-kexec'ed kernel image on the target file
    system with:

        $ evmctl ima_sign --key /path/to/private_key.pem Image

    On the live system, you must have IMA enforced with, at least, the
    following security policy:

        "appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig"

    See more details about IMA here:

        https://sourceforge.net/p/linux-ima/wiki/Home/

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Reviewed-by: James Morse <james.morse@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: kexec_file: load initrd and device-tree (AKASHI Takahiro, 2018-12-06, 2 files, -0/+202)

    load_other_segments() is expected to allocate and place all the
    necessary memory segments other than the kernel, including the initrd
    and device-tree blob (and the elf core header for crash). While most
    of the code was borrowed from kexec-tools' counterpart, users are not
    allowed to specify a dtb explicitly; instead, the dtb presented by
    the original boot loader is reused.

    arch_kimage_file_post_load_cleanup() is responsible for freeing
    arm64-specific data allocated in load_other_segments().

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Reviewed-by: James Morse <james.morse@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: enable KEXEC_FILE config (AKASHI Takahiro, 2018-12-06, 3 files, -1/+27)

    Modify arm64/Kconfig to enable kexec_file_load support.

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Acked-by: James Morse <james.morse@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: cpufeature: add MMFR0 helper functions (AKASHI Takahiro, 2018-12-06, 1 file, -0/+48)

    Those helper functions for the MMFR0 register will be used later by
    the kexec_file loader.

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Reviewed-by: James Morse <james.morse@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: add image head flag definitions (AKASHI Takahiro, 2018-12-06, 3 files, -9/+74)

    Those image head flags will be used later by the kexec_file loader.

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Acked-by: James Morse <james.morse@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
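    For reference, a sketch of the Image header those flags describe,
    with the field layout as documented in Documentation/arm64/booting.txt
    (the struct name is assumed here):

        struct arm64_image_header {
                __le32  code0;          /* executable code */
                __le32  code1;
                __le64  text_offset;    /* image load offset */
                __le64  image_size;     /* effective image size */
                __le64  flags;          /* bit 0: endianness; bits 1-2:
                                           page size; bit 3: phys placement */
                __le64  res2, res3, res4;
                __le32  magic;          /* 0x644d5241, "ARM\x64" */
                __le32  res5;           /* PE/COFF header offset */
        };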
| * | Merge branch 'kvm/cortex-a76-erratum-1165522' into aarch64/for-next/core (Will Deacon, 2018-12-10, 8 files, -16/+123)

    Pull in the KVM workaround for A76 erratum #1165522.

    Conflicts:
        arch/arm64/include/asm/cpucaps.h
| | * | arm64: Add configuration/documentation for Cortex-A76 erratum 1165522 (Marc Zyngier, 2018-12-10, 1 file, -0/+12)

    Now that the infrastructure to handle erratum 1165522 is in place,
    let's make it a selectable option and add the required documentation.

    Reviewed-by: James Morse <james.morse@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: KVM: Handle ARM erratum 1165522 in TLB invalidation (Marc Zyngier, 2018-12-10, 1 file, -15/+51)

    In order to avoid TLB corruption whilst invalidating TLBs on CPUs
    affected by erratum 1165522, we need to prevent S1 page tables from
    being usable.

    For this, we set the EL1 S1 MMU on, and also disable the page table
    walker (by setting the TCR_EL1.EPD* bits to 1).

    This ensures that once we switch to the EL1/EL0 translation regime,
    speculated AT instructions won't be able to parse the page tables.

    Acked-by: Christoffer Dall <christoffer.dall@arm.com>
    Reviewed-by: James Morse <james.morse@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>