path: root/arch/arm64/kernel/cpufeature.c
* arm64/prefetch: fix a -Wtype-limits warning (Qian Cai, 2019-10-05; 1 file, -1/+1)

    [ Upstream commit b99286b088ea843b935dcfb29f187697359fe5cd ]

    The commit d5370f754875 ("arm64: prefetch: add alternative pattern for CPUs without a prefetcher") introduced MIDR_IS_CPU_MODEL_RANGE() to be used in has_no_hw_prefetch() with rv_min=0 which generates a compilation warning from GCC,

      In file included from ./arch/arm64/include/asm/cache.h:8,
                       from ./include/linux/cache.h:6,
                       from ./include/linux/printk.h:9,
                       from ./include/linux/kernel.h:15,
                       from ./include/linux/cpumask.h:10,
                       from arch/arm64/kernel/cpufeature.c:11:
      arch/arm64/kernel/cpufeature.c: In function 'has_no_hw_prefetch':
      ./arch/arm64/include/asm/cputype.h:59:26: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
         _model == (model) && rv >= (rv_min) && rv <= (rv_max); \
                                 ^~
      arch/arm64/kernel/cpufeature.c:889:9: note: in expansion of macro 'MIDR_IS_CPU_MODEL_RANGE'
        return MIDR_IS_CPU_MODEL_RANGE(midr, MIDR_THUNDERX,
               ^~~~~~~~~~~~~~~~~~~~~~~

    Fix it by converting MIDR_IS_CPU_MODEL_RANGE to a static inline function.

    Signed-off-by: Qian Cai <cai@lca.pw>
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
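The shape of the fix, as a minimal sketch (MIDR_* mask macros assumed from <asm/cputype.h>; the exact upstream helper may differ in detail): once rv_min is a real function parameter rather than a macro argument, GCC no longer sees a literal "unsigned >= 0" comparison.

    static inline bool midr_is_cpu_model_range(u32 midr, u32 model,
                                               u32 rv_min, u32 rv_max)
    {
            u32 _model = midr & MIDR_CPU_MODEL_MASK;
            u32 rv = midr & (MIDR_REVISION_MASK | MIDR_VARIANT_MASK);

            /* rv_min is now a variable, so rv >= rv_min is no longer a
             * constant-folded "always true" comparison. */
            return _model == model && rv >= rv_min && rv <= rv_max;
    }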
* arm64: cpufeature: Don't treat granule sizes as strict (Will Deacon, 2019-09-06; 1 file, -3/+11)

    [ Upstream commit 5717fe5ab38f9ccb32718bcb03bea68409c9cce4 ]

    If a CPU doesn't support the page size for which the kernel is configured, then we will complain and refuse to bring it online. For secondary CPUs (and the boot CPU on a system booting with EFI), we will also print an error identifying the mismatch.

    Consequently, the only time that the cpufeature code can detect a granule size mismatch is for a granule other than the one that is currently being used. Although we would rather such systems didn't exist, we've unfortunately lost that battle and Kevin reports that on his amlogic S922X (odroid-n2 board) we end up warning and tainting with defconfig because 16k pages are not supported by all of the CPUs.

    In such a situation, we don't actually care about the feature mismatch, particularly now that KVM only exposes the sanitised view of the CPU registers (commit 93390c0a1b20 - "arm64: KVM: Hide unsupported AArch64 CPU features from guests"). Treat the granule fields as non-strict and let Kevin run without a tainted kernel. See the sketch after this entry.

    Cc: Marc Zyngier <maz@kernel.org>
    Reported-by: Kevin Hilman <khilman@baylibre.com>
    Tested-by: Kevin Hilman <khilman@baylibre.com>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>
    [catalin.marinas@arm.com: changelog updated with KVM sanitised regs commit]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
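Concretely, the TGRAN fields of the ftr_id_aa64mmfr0[] table can be marked FTR_NONSTRICT so a mismatch no longer warns or taints. A hedged sketch of one such entry (macro and field names per the kernel headers of that era; the real table lists many more fields):

    static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
            /*
             * Differing granule support between CPUs is tolerated:
             * non-strict means a mismatch does not taint the kernel.
             */
            S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
                             ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
            /* ... remaining ID_AA64MMFR0_EL1 fields ... */
            ARM64_FTR_END,
    };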
* arm64: cpufeature: Fix feature comparison for CTR_EL0.{CWG,ERG} (Will Deacon, 2019-08-06; 1 file, -2/+6)

    commit 147b9635e6347104b91f48ca9dca61eb0fbf2a54 upstream.

    If CTR_EL0.{CWG,ERG} are 0b0000 then they must be interpreted to have their architecturally maximum values, which defeats the use of FTR_HIGHER_SAFE when sanitising CPU ID registers on heterogeneous machines.

    Introduce FTR_HIGHER_OR_ZERO_SAFE so that these fields effectively saturate at zero.

    Fixes: 3c739b571084 ("arm64: Keep track of CPU feature registers")
    Cc: <stable@vger.kernel.org> # 4.4.x-
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
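A simplified sketch of how the new arbitration type can slot into the feature-register sanitisation (the upstream arm64_ftr_safe_value() switch has additional cases):

    static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp,
                                    s64 new, s64 cur)
    {
            s64 ret = 0;

            switch (ftrp->type) {
            case FTR_LOWER_SAFE:
                    ret = new < cur ? new : cur;
                    break;
            case FTR_HIGHER_OR_ZERO_SAFE:
                    /* 0 means "architectural maximum", so saturate at 0. */
                    if (!cur || !new)
                            break;
                    /* fall through */
            case FTR_HIGHER_SAFE:
                    ret = new > cur ? new : cur;
                    break;
            default:
                    BUG();
            }

            return ret;
    }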
* treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 (Thomas Gleixner, 2019-06-19; 1 file, -12/+1)

    Based on 1 normalized pattern(s):

      this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses

    extracted by the scancode license scanner the SPDX license identifier

      GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 503 file(s).

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
    Reviewed-by: Allison Randal <allison@lohutok.net>
    Reviewed-by: Enrico Weigelt <info@metux.net>
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* arm64: cpufeature: Fix missing ZFR0 in __read_sysreg_by_encoding() (Dave Martin, 2019-06-05; 1 file, -0/+1)

    In commit 06a916feca2b ("arm64: Expose SVE2 features for userspace"), new hwcaps are added that are detected via fields in the SVE-specific ID register ID_AA64ZFR0_EL1.

    In order to check compatibility of secondary cpus with the hwcaps established at boot, the cpufeatures code uses __read_sysreg_by_encoding() to read this ID register based on the sys_reg field of the arm64_elf_hwcaps[] table.

    This leads to a kernel splat if an hwcap uses an ID register that __read_sysreg_by_encoding() doesn't explicitly handle, as now happens when exercising cpu hotplug on an SVE2-capable platform.

    So fix it by adding the required case in there.

    Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
    Signed-off-by: Dave Martin <Dave.Martin@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
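A minimal sketch of the shape of the fix (the real function dispatches many more ID registers through the same read_sysreg_case() helper):

    static u64 __read_sysreg_by_encoding(u32 sys_id)
    {
            switch (sys_id) {
            read_sysreg_case(SYS_ID_AA64PFR0_EL1);
            read_sysreg_case(SYS_ID_AA64ZFR0_EL1);  /* the case this fix adds */
            read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
            /* ... remaining ID registers ... */
            default:
                    BUG();
                    return 0;
            }
    }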
* Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm (Linus Torvalds, 2019-05-17; 1 file, -1/+1)

    Pull KVM updates from Paolo Bonzini:
     "ARM:
       - support for SVE and Pointer Authentication in guests
       - PMU improvements

      POWER:
       - support for direct access to the POWER9 XIVE interrupt controller
       - memory and performance optimizations

      x86:
       - support for accessing memory not backed by struct page
       - fixes and refactoring

      Generic:
       - dirty page tracking improvements"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (155 commits)
      kvm: fix compilation on aarch64
      Revert "KVM: nVMX: Expose RDPMC-exiting only when guest supports PMU"
      kvm: x86: Fix L1TF mitigation for shadow MMU
      KVM: nVMX: Disable intercept for FS/GS base MSRs in vmcs02 when possible
      KVM: PPC: Book3S: Remove useless checks in 'release' method of KVM device
      KVM: PPC: Book3S HV: XIVE: Fix spelling mistake "acessing" -> "accessing"
      KVM: PPC: Book3S HV: Make sure to load LPID for radix VCPUs
      kvm: nVMX: Set nested_run_pending in vmx_set_nested_state after checks complete
      tests: kvm: Add tests for KVM_SET_NESTED_STATE
      KVM: nVMX: KVM_SET_NESTED_STATE - Tear down old EVMCS state before setting new state
      tests: kvm: Add tests for KVM_CAP_MAX_VCPUS and KVM_CAP_MAX_CPU_ID
      tests: kvm: Add tests to .gitignore
      KVM: Introduce KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2
      KVM: Fix kvm_clear_dirty_log_protect off-by-(minus-)one
      KVM: Fix the bitmap range to copy during clear dirty
      KVM: arm64: Fix ptrauth ID register masking logic
      KVM: x86: use direct accessors for RIP and RSP
      KVM: VMX: Use accessors for GPRs outside of dedicated caching logic
      KVM: x86: Omit caching logic for always-available GPRs
      kvm, x86: Properly check whether a pfn is an MMIO or not
      ...
| * arm64/sve: Check SVE virtualisability (Dave Martin, 2019-03-29; 1 file, -1/+1)

    Due to the way the effective SVE vector length is controlled and trapped at different exception levels, certain mismatches in the sets of vector lengths supported by different physical CPUs in the system may prevent straightforward virtualisation of SVE at parity with the host.

    This patch analyses the extent to which SVE can be virtualised safely without interfering with migration of vcpus between physical CPUs, and rejects late secondary CPUs that would erode the situation further.

    It is left up to KVM to decide what to do with this information.

    Signed-off-by: Dave Martin <Dave.Martin@arm.com>
    Reviewed-by: Julien Thierry <julien.thierry@arm.com>
    Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* | Merge branch 'for-next/mitigations' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into for-next/core (Will Deacon, 2019-05-01; 1 file, -15/+51)
| * | arm64/speculation: Support 'mitigations=' cmdline option (Josh Poimboeuf, 2019-05-01; 1 file, -1/+7)

    Configure arm64 runtime CPU speculation bug mitigations in accordance with the 'mitigations=' cmdline option. This affects Meltdown, Spectre v2, and Speculative Store Bypass.

    The default behavior is unchanged.

    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    [will: reorder checks so KASLR implies KPTI and SSBS is affected by cmdline]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
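A hedged sketch of how this can hook into the kpti decision: cpu_mitigations_off() is the generic helper backing 'mitigations=', and the fragment below (simplified, with surrounding declarations of str and __kpti_forced elided) shows the idea inside the unmap_kernel_at_el0() matches callback.

    /* Inside the kpti decision logic: honour 'mitigations=off' unless
     * kpti was explicitly forced on via the kpti= option. */
    if (cpu_mitigations_off() && !__kpti_forced) {
            str = "mitigations=off";   /* reported in the "disabled by" message */
            __kpti_forced = -1;
    }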
| * | arm64: add sysfs vulnerability show for meltdown (Jeremy Linton, 2019-04-26; 1 file, -14/+44)

    We implement page table isolation as a mitigation for meltdown. Report this to userspace via sysfs.

    Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Reviewed-by: Andre Przywara <andre.przywara@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
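A sketch of what such a sysfs show routine looks like (the signature is the generic /sys/devices/system/cpu/vulnerabilities hook; the __meltdown_safe flag is an assumption standing in for whatever state the patch records during feature detection):

    ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
                              char *buf)
    {
            if (__meltdown_safe)
                    return sprintf(buf, "Not affected\n");

            if (arm64_kernel_unmapped_at_el0())
                    return sprintf(buf, "Mitigation: PTI\n");

            return sprintf(buf, "Vulnerable\n");
    }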
* | arm64: Expose SVE2 features for userspace (Dave Martin, 2019-04-23; 1 file, -1/+16)

    This patch provides support for reporting the presence of SVE2 and its optional features to userspace.

    This will also enable visibility of SVE2 for guests, when KVM support for SVE-enabled guests is available.

    Signed-off-by: Dave Martin <Dave.Martin@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* | arm64: Advertise ARM64_HAS_DCPODP cpu feature (Andrew Murray, 2019-04-16; 1 file, -0/+10)

    Advertise ARM64_HAS_DCPODP when both DC CVAP and DC CVADP are supported.

    Even though we don't use this feature now, we provide it for consistency with DCPOP and anticipate it being used in the future.

    Signed-off-by: Andrew Murray <andrew.murray@arm.com>
    Reviewed-by: Dave Martin <Dave.Martin@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* | arm64: Expose DC CVADP to userspace (Andrew Murray, 2019-04-16; 1 file, -0/+1)

    ARMv8.5 builds upon the ARMv8.2 DC CVAP instruction by introducing a DC CVADP instruction which cleans the data cache to the point of deep persistence. Let's expose this support via the arm64 ELF hwcaps.

    Signed-off-by: Andrew Murray <andrew.murray@arm.com>
    Reviewed-by: Dave Martin <Dave.Martin@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* | arm64: HWCAP: encapsulate elf_hwcap (Andrew Murray, 2019-04-16; 1 file, -2/+31)

    The introduction of AT_HWCAP2 introduced accessors which ensure that hwcap features are set and tested appropriately.

    Let's now mandate access to elf_hwcap via these accessors by making elf_hwcap static within cpufeature.c.

    Signed-off-by: Andrew Murray <andrew.murray@arm.com>
    Reviewed-by: Dave Martin <Dave.Martin@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* | arm64: HWCAP: add support for AT_HWCAP2 (Andrew Murray, 2019-04-16; 1 file, -33/+33)

    As we will exhaust the first 32 bits of AT_HWCAP let's start exposing AT_HWCAP2 to userspace to give us up to 64 caps.

    Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we prefer to expand into AT_HWCAP2 in order to provide a consistent view to userspace between ILP32 and LP64. However internal to the kernel we prefer to continue to use the full space of elf_hwcap.

    To reduce complexity and allow for future expansion, we now represent hwcaps in the kernel as ordinals and use a KERNEL_HWCAP_ prefix. This allows us to support automatic feature based module loading for all our hwcaps.

    We introduce cpu_set_feature to set hwcaps which complements the existing cpu_have_feature helper. These helpers allow us to clean up existing direct uses of elf_hwcap and reduce any future effort required to move beyond 64 caps.

    For convenience we also introduce cpu_{have,set}_named_feature which makes use of the cpu_feature macro to allow providing a hwcap name without a {KERNEL_}HWCAP_ prefix.

    Signed-off-by: Andrew Murray <andrew.murray@arm.com>
    [will: use const_ilog2() and tweak documentation]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
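A hedged sketch of the accessor shape described above (the real helpers live in <asm/cpufeature.h> and <asm/hwcap.h>; details such as the WARN and the exact split helpers may differ):

    #define MAX_CPU_FEATURES        64

    static inline void cpu_set_feature(unsigned int num)
    {
            WARN_ON(num >= MAX_CPU_FEATURES);
            elf_hwcap |= BIT(num);          /* indexed by KERNEL_HWCAP_* ordinal */
    }

    static inline bool cpu_have_feature(unsigned int num)
    {
            WARN_ON(num >= MAX_CPU_FEATURES);
            return elf_hwcap & BIT(num);
    }

    /* Userspace still sees two 32-bit words: */
    #define ELF_HWCAP       cpu_get_elf_hwcap()     /* low 32 bits  -> AT_HWCAP  */
    #define ELF_HWCAP2      cpu_get_elf_hwcap2()    /* high 32 bits -> AT_HWCAP2 */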
* arm64: kpti: Whitelist HiSilicon Taishan v110 CPUs (Hanjun Guo, 2019-03-19; 1 file, -0/+1)

    HiSilicon Taishan v110 CPUs don't implement the CSV3 field of ID_AA64PFR0_EL1 and are not susceptible to Meltdown, so whitelist the MIDR in the kpti_safe_list[] table.

    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
    Reviewed-by: John Garry <john.garry@huawei.com>
    Reviewed-by: Zhangshaokun <zhangshaokun@hisilicon.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
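The whitelist is a MIDR range table scanned by the kpti decision logic; a sketch of its shape (entries abbreviated, macro names assumed from <asm/cputype.h> of that era):

    static const struct midr_range kpti_safe_list[] = {
            MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
            MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
            MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),    /* added by this patch */
            MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
            /* ... other unaffected Cortex-A parts ... */
            { /* sentinel */ }
    };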
* arm64: Enable the support of pseudo-NMIs (Julien Thierry, 2019-02-06; 1 file, -1/+9)

    Add a build option and a command line parameter to build and enable the support of pseudo-NMIs.

    Signed-off-by: Julien Thierry <julien.thierry@arm.com>
    Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: alternative: Apply alternatives early in boot process (Daniel Thompson, 2019-02-06; 1 file, -0/+6)

    Currently alternatives are applied very late in the boot process (and a long time after we enable scheduling). Some alternative sequences, such as those that alter the way CPU context is stored, must be applied much earlier in the boot sequence.

    Introduce apply_boot_alternatives() to allow some alternatives to be applied immediately after we detect the CPU features of the boot CPU.

    Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
    [julien.thierry@arm.com: rename to fit new cpufeature framework better, apply BOOT_SCOPE feature early in boot]
    Signed-off-by: Julien Thierry <julien.thierry@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Christoffer Dall <christoffer.dall@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: alternative: Allow alternative status checking per cpufeature (Julien Thierry, 2019-02-06; 1 file, -1/+1)

    In preparation for the application of alternatives at different points during the boot process, provide the possibility to check whether the alternatives for a feature of interest were already applied, instead of having a global boolean for all alternatives.

    Make the VHE enablement code check for the VHE feature instead of considering all alternatives.

    Signed-off-by: Julien Thierry <julien.thierry@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Marc Zyngier <Marc.Zyngier@arm.com>
    Cc: Christoffer Dall <Christoffer.Dall@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: cpufeature: Add cpufeature for IRQ priority masking (Julien Thierry, 2019-02-06; 1 file, -0/+23)

    Add a cpufeature indicating whether a cpu supports masking interrupts by priority.

    The feature will be properly enabled in a later patch.

    Signed-off-by: Julien Thierry <julien.thierry@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: cpufeature: Set SYSREG_GIC_CPUIF as a boot system feature (Julien Thierry, 2019-02-06; 1 file, -1/+1)

    It is not supported to have some CPUs using the GICv3 sysreg CPU interface while others do not.

    Once ICC_SRE_EL1.SRE is set on a CPU, the bit cannot be cleared. Since matching this feature requires setting ICC_SRE_EL1.SRE, it cannot be turned off if found on a CPU.

    Set the feature as STRICT_BOOT: if the boot CPU has it, all other CPUs are required to have it.

    Signed-off-by: Julien Thierry <julien.thierry@arm.com>
    Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: kpti: Avoid rewriting early page tables when KASLR is enabled (Will Deacon, 2019-01-10; 1 file, -2/+7)

    A side effect of commit c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well") is that the linear map is created with page granularity, which means that transitioning the early page table from global to non-global mappings when enabling kpti can take a significant amount of time during boot.

    Given that most CPU implementations do not require kpti, this mainly impacts KASLR builds where kpti is forcefully enabled. However, in these situations we know early on that non-global mappings are required and can avoid the use of global mappings from the beginning. The only gotcha is Cavium erratum #27456, which we must detect based on the MIDR value of the boot CPU.

    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Reported-by: John Garry <john.garry@huawei.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2018-12-25; 1 file, -83/+229)

    Pull arm64 festive updates from Will Deacon:
     "In the end, we ended up with quite a lot more than I expected:

      - Support for ARMv8.3 Pointer Authentication in userspace (CRIU and kernel-side support to come later)

      - Support for per-thread stack canaries, pending an update to GCC that is currently undergoing review

      - Support for kexec_file_load(), which permits secure boot of a kexec payload but also happens to improve the performance of kexec dramatically because we can avoid the sucky purgatory code from userspace. Kdump will come later (requires updates to libfdt).

      - Optimisation of our dynamic CPU feature framework, so that all detected features are enabled via a single stop_machine() invocation

      - KPTI whitelisting of Cortex-A CPUs unaffected by Meltdown, so that they can benefit from global TLB entries when KASLR is not in use

      - 52-bit virtual addressing for userspace (kernel remains 48-bit)

      - Patch in LSE atomics for per-cpu atomic operations

      - Custom preempt.h implementation to avoid unconditional calls to preempt_schedule() from preempt_enable()

      - Support for the new 'SB' Speculation Barrier instruction

      - Vectorised implementation of XOR checksumming and CRC32 optimisations

      - Workaround for Cortex-A76 erratum #1165522

      - Improved compatibility with Clang/LLD

      - Support for TX2 system PMUS for profiling the L3 cache and DMC

      - Reflect read-only permissions in the linear map by default

      - Ensure MMIO reads are ordered with subsequent calls to Xdelay()

      - Initial support for memory hotplug

      - Tweak the threshold when we invalidate the TLB by-ASID, so that mremap() performance is improved for ranges spanning multiple PMDs.

      - Minor refactoring and cleanups"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (125 commits)
      arm64: kaslr: print PHYS_OFFSET in dump_kernel_offset()
      arm64: sysreg: Use _BITUL() when defining register bits
      arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches
      arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4
      arm64: docs: document pointer authentication
      arm64: ptr auth: Move per-thread keys from thread_info to thread_struct
      arm64: enable pointer authentication
      arm64: add prctl control for resetting ptrauth keys
      arm64: perf: strip PAC when unwinding userspace
      arm64: expose user PAC bit positions via ptrace
      arm64: add basic pointer authentication support
      arm64/cpufeature: detect pointer authentication
      arm64: Don't trap host pointer auth use to EL2
      arm64/kvm: hide ptrauth from guests
      arm64/kvm: consistently handle host HCR_EL2 flags
      arm64: add pointer authentication register bits
      arm64: add comments about EC exception levels
      arm64: perf: Treat EXCLUDE_EL* bit definitions as unsigned
      arm64: kpti: Whitelist Cortex-A CPUs that don't implement the CSV3 field
      arm64: enable per-task stack canaries
      ...
| * arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches (Will Deacon, 2018-12-13; 1 file, -43/+52)

    Open-coding the pointer-auth HWCAPs is a mess and can be avoided by reusing the multi-cap logic from the CPU errata framework.

    Move the multi_entry_cap_matches code to cpufeature.h and reuse it for the pointer auth HWCAPs.

    Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4 (Will Deacon, 2018-12-13; 1 file, -10/+1)

    We can easily avoid defining the two meta-capabilities for the address and generic keys, so remove them and instead just check both of the architected and impdef capabilities when determining the level of system support.

    Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: add basic pointer authentication support (Mark Rutland, 2018-12-13; 1 file, -0/+13)

    This patch adds basic support for pointer authentication, allowing userspace to make use of APIAKey, APIBKey, APDAKey, APDBKey, and APGAKey. The kernel maintains key values for each process (shared by all threads within), which are initialised to random values at exec() time.

    The ID_AA64ISAR1_EL1.{APA,API,GPA,GPI} fields are exposed to userspace, to describe that pointer authentication instructions are available and that the kernel is managing the keys. Two new hwcaps are added for the same reason: PACA (for address authentication) and PACG (for generic authentication).

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Tested-by: Adam Wallis <awallis@codeaurora.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    [will: Fix sizeof() usage and unroll address key initialisation]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64/cpufeature: detect pointer authentication (Mark Rutland, 2018-12-13; 1 file, -0/+90)

    So that we can dynamically handle the presence of pointer authentication functionality, wire up probing code in cpufeature.c.

    From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now has four fields describing the presence of pointer authentication functionality:

    * APA - address authentication present, using an architected algorithm
    * API - address authentication present, using an IMP DEF algorithm
    * GPA - generic authentication present, using an architected algorithm
    * GPI - generic authentication present, using an IMP DEF algorithm

    This patch checks for both address and generic authentication, separately. It is assumed that if all CPUs support an IMP DEF algorithm, the same algorithm is used across all CPUs.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: kpti: Whitelist Cortex-A CPUs that don't implement the CSV3 field (Will Deacon, 2018-12-13; 1 file, -0/+6)

    While the CSV3 field of the ID_AA64_PFR0 CPU ID register can be checked to see if a CPU is susceptible to Meltdown and therefore requires kpti to be enabled, existing CPUs do not implement this field.

    We therefore whitelist all unaffected Cortex-A CPUs that do not implement the CSV3 field.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Add support for SB barrier and patch in over DSB; ISB sequences (Will Deacon, 2018-12-06; 1 file, -0/+12)

    We currently use a DSB; ISB sequence to inhibit speculation in set_fs(). Whilst this works for current CPUs, future CPUs may implement a new SB barrier instruction which acts as an architected speculation barrier.

    On CPUs that support it, patch in an SB; NOP sequence over the DSB; ISB sequence and advertise the presence of the new instruction to userspace.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: capabilities: Batch cpu_enable callbacks (Suzuki K Poulose, 2018-12-06; 1 file, -26/+44)

    We use a stop_machine call for each available capability to enable it on all the CPUs available at boot time. Instead we could batch the cpu_enable callbacks to a single stop_machine() call to save us some time.

    Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: capabilities: Use linear array for detection and verification (Suzuki K Poulose, 2018-12-06; 1 file, -25/+17)

    Use the sorted list of capability entries for the detection and verification.

    Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: capabilities: Optimize this_cpu_has_cap (Suzuki K Poulose, 2018-12-06; 1 file, -22/+9)

    Make use of the sorted capability list to access the capability entry in this_cpu_has_cap() to avoid iterating over the two tables.

    Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: capabilities: Speed up capability lookup (Suzuki K Poulose, 2018-12-06; 1 file, -2/+30)

    We maintain two separate tables of capabilities, errata and features, which decide the system capabilities. We iterate over each of these tables for various operations (e.g, detection, verification etc.). We do not have a way to map a system "capability" to its entry (i.e, cap -> struct arm64_cpu_capabilities), which is needed for this_cpu_has_cap(). So we iterate over the table one by one to find the entry and then do the operation. Also, this prevents us from optimizing the way we "enable" the capabilities on the CPUs, where we now issue a stop_machine() for each available capability.

    One solution is to merge the two tables into a single table, sorted by the capability. But this has the following disadvantages:
     - We lose the "classification" of an errata vs. feature
     - It is quite easy to make a mistake when adding an entry, unless we sort the table at runtime.

    So we maintain a list of pointers to the capability entry, sorted by the "cap number", in a separate array, initialized at boot time. The only restriction is that we can have one "entry" per capability. While at it, remove the duplicate declaration of the arm64_errata table. See the sketch after this entry.

    Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
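A hedged sketch of the pointer-array approach described above (simplified; the real initialisation covers both the feature and errata tables, and this_cpu_has_cap() also guards against preemption):

    static const struct arm64_cpu_capabilities *cpu_hwcaps_ptrs[ARM64_NCAPS];

    static void __init init_cpu_hwcaps_indirect_list_from_array(
                        const struct arm64_cpu_capabilities *caps)
    {
            for (; caps->matches; caps++) {
                    if (WARN(caps->capability >= ARM64_NCAPS,
                             "Invalid capability %d\n", caps->capability))
                            continue;
                    if (WARN(cpu_hwcaps_ptrs[caps->capability],
                             "Duplicate entry for capability %d\n",
                             caps->capability))
                            continue;
                    cpu_hwcaps_ptrs[caps->capability] = caps;
            }
    }

    /* this_cpu_has_cap() then becomes a direct lookup instead of a scan: */
    bool this_cpu_has_cap(unsigned int n)
    {
            const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n];

            return cap && cap->matches(cap, SCOPE_LOCAL_CPU);
    }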
* | arm64: cpufeature: Fix mismerge of CONFIG_ARM64_SSBD block (Will Deacon, 2018-11-23; 1 file, -1/+1)

    When merging support for SSBD and the CRC32 instructions, the conflict resolution for the new capability entries in arm64_features[] inadvertently predicated the availability of the CRC32 instructions on CONFIG_ARM64_SSBD, despite the functionality being entirely unrelated.

    Move the #ifdef CONFIG_ARM64_SSBD down so that it only covers the SSBD capability.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: cpufeature: Fix handling of CTR_EL0.IDC field (Suzuki K Poulose, 2018-10-16; 1 file, -1/+14)

    CTR_EL0.IDC reports the data cache clean requirements for instruction to data coherence. However, if the field is 0, we need to check the CLIDR_EL1 fields to detect the status of the feature. Currently we don't do this, and we generate a warning and taint the kernel when there is a mismatch in the field among the CPUs. Also, userspace doesn't have a reliable way to check the CLIDR_EL1 register to determine the status.

    This patch fixes the problem by checking the CLIDR_EL1 fields when (CTR_EL0.IDC == 0) and updating the kernel's copy of CTR_EL0 for the CPU with the actual status of the feature. This allows the sanity check infrastructure to do the proper checking of the fields and also allows the CTR_EL0 emulation code to supply the real status of the feature.

    Now, if a CPU has raw CTR_EL0.IDC == 0 and effective IDC == 1 (with overall system wide IDC == 1), we need to expose the real value to the user. So, we trap CTR_EL0 access on the CPU which reports incorrect CTR_EL0.IDC.

    Fixes: commit 6ae4b6e057888 ("arm64: Add support for new control bits CTR_EL0.DIC and CTR_EL0.IDC")
    Cc: Shanker Donthineni <shankerd@codeaurora.org>
    Cc: Philip Elcan <pelcan@codeaurora.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
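A hedged sketch of the CLIDR_EL1 fallback (macro and field names per the arm64 cache headers of that era; the real helper lives in <asm/cache.h>): when CTR_EL0.IDC reads as 0 but CLIDR_EL1 shows no cache level that actually needs cleaning, the effective IDC is treated as 1.

    static inline u32 read_cpuid_effective_cachetype(void)
    {
            u32 ctr = read_cpuid_cachetype();

            if (!(ctr & BIT(CTR_IDC_SHIFT))) {
                    u64 clidr = read_sysreg(clidr_el1);

                    /* No cache level needs cleaning for I/D coherence. */
                    if (CLIDR_LOC(clidr) == 0 ||
                        (CLIDR_LOUIS(clidr) == 0 && CLIDR_LOUU(clidr) == 0))
                            ctr |= BIT(CTR_IDC_SHIFT);
            }

            return ctr;
    }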
* arm64: cpufeature: ctr: Fix cpu capability check for late CPUs (Suzuki K Poulose, 2018-10-16; 1 file, -4/+18)

    The matches() routine for a capability must honor the "scope" passed to it and return the proper results. i.e, when passed with SCOPE_LOCAL_CPU, it should check the status of the capability on the current CPU. This is used by verify_local_cpu_capabilities() on a late secondary CPU to make sure that it's compliant with the established system features.

    However, ARM64_HAS_CACHE_{IDC/DIC} always checks the system wide registers, and this could mean that a late secondary CPU could return "true" (since the CPU hasn't updated the system wide registers yet) and thus leave the system in an inconsistent state, where the system assumes it has the IDC/DIC feature while the new CPU doesn't.

    Fixes: commit 6ae4b6e0578886eb36 ("arm64: Add support for new control bits CTR_EL0.DIC and CTR_EL0.IDC")
    Cc: Philip Elcan <pelcan@codeaurora.org>
    Cc: Shanker Donthineni <shankerd@codeaurora.org>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/cpufeatures: Factorize emulate_mrs() (Anshuman Khandual, 2018-09-21; 1 file, -10/+15)

    MRS emulation gets triggered with exception class (0x00 or 0x18), eventually calling the function emulate_mrs(), which fetches the user space instruction and analyses its encodings (OP0, OP1, OP2, CRN, CRM, RT). The kernel tries to emulate the given instruction by looking into the encoding details.

    Going forward these encodings can also be parsed from ESR_ELx.ISS fields, without needing to fetch/decode the faulting userspace instruction, which can improve performance. This factorizes the emulate_mrs() function in a way that it can be called directly with MRS encodings (OP0, OP1, OP2, CRN, CRM) for any given target register, which can then be used directly from the 0x18 exception class.

    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: mm: Support Common Not Private translations (Vladimir Murzin, 2018-09-18; 1 file, -0/+34)

    Common Not Private (CNP) is a feature of the ARMv8.2 extension which allows translation table entries to be shared between different PEs in the same inner shareable domain, so the hardware can use this fact to optimise the caching of such entries in the TLB.

    CNP occupies one bit in TTBRx_ELy and VTTBR_EL2, which advertises to the hardware that the translation table entries pointed to by this TTBR are the same as every PE in the same inner shareable domain for which the equivalent TTBR also has the CNP bit set.

    In case the CNP bit is set but the TTBR does not point at the same translation table entries for a given ASID and VMID, then the system is mis-configured, so the results of translations are UNPREDICTABLE.

    For the kernel we postpone setting CNP till all cpus are up and rely on the cpufeature framework to 1) patch the code which is sensitive to CNP and 2) update TTBR1_EL1 with the CNP bit set. TTBR1_EL1 can be reprogrammed as a result of hibernation or cpuidle (via __enable_mmu). For these two cases we restore the CnP bit via __cpu_suspend_exit().

    There are a few cases where we need to care about changes in TTBR0_EL1:
     - a switch to idmap
     - software emulated PAN

    We rule out the latter via Kconfig options, and for the former we make sure that CNP is set for non-zero ASIDs only.

    Reviewed-by: James Morse <james.morse@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
    [catalin.marinas@arm.com: default y for CONFIG_ARM64_CNP]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: sysreg: Clean up instructions for modifying PSTATE fields (Suzuki K Poulose, 2018-09-17; 1 file, -3/+3)

    Instructions for modifying the PSTATE fields which were not supported in the older toolchains (e.g, PAN, UAO) are generated using macros. We have so far used the normal sys_reg() helper for defining the PSTATE fields. While this works fine, it is really difficult to correlate the code with the Arm ARM definition.

    As per the Arm ARM, the PSTATE fields are defined only using Op1, Op2 fields, with fixed values for Op0, CRn. Also the CRm field has been reserved for the Immediate value for the instruction. So using sys_reg() looks quite confusing.

    This patch cleans up the instruction helpers by bringing them in line with the Arm ARM definitions to make it easier to correlate code with the document. No functional changes.

    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: cpu: Move errata and feature enable callbacks closer to callers (Will Deacon, 2018-09-14; 1 file, -6/+22)

    The cpu errata and feature enable callbacks are only called via their respective arm64_cpu_capabilities structure and therefore shouldn't exist in the global namespace. Move the PAN, RAS and cache maintenance emulation enable callbacks into the same files as their corresponding arm64_cpu_capabilities structures, making them static in the process.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3 (Will Deacon, 2018-09-14; 1 file, -0/+45)

    On CPUs with support for PSTATE.SSBS, the kernel can toggle the SSBD state without needing to call into firmware.

    This patch hooks into the existing SSBD infrastructure so that SSBS is used on CPUs that support it, but it's all made horribly complicated by the very real possibility of big/little systems that don't uniformly provide the new capability.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: cpufeature: Detect SSBS and advertise to userspace (Will Deacon, 2018-09-14; 1 file, -2/+17)

    Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass Safe (SSBS) which can be used as a mitigation against Spectre variant 4.

    Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS directly, so that userspace can toggle the SSBS control without trapping to the kernel.

    This patch probes for the existence of SSBS and advertises the new instructions to userspace if they exist.

    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
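A hedged sketch of the arm64_features[] entry such a probe adds (field and constant names assumed from ID_AA64PFR1_EL1.SSBS; the exact entry in the tree may differ):

    {
            .desc = "Speculative Store Bypassing Safe (SSBS)",
            .capability = ARM64_SSBS,
            .type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
            .matches = has_cpuid_feature,
            .sys_reg = SYS_ID_AA64PFR1_EL1,
            .field_pos = ID_AA64PFR1_SSBS_SHIFT,
            .sign = FTR_UNSIGNED,
            .min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
    },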
* arm64: cpufeature: add feature for CRC32 instructions (Ard Biesheuvel, 2018-09-10; 1 file, -0/+9)

    Add a CRC32 feature bit and wire it up to the CPU id register so we will be able to use alternatives patching for CRC32 operations.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
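A sketch of what such an ID-register-backed capability entry looks like in arm64_features[] (constant names assumed from ID_AA64ISAR0_EL1.CRC32; additional fields such as .sign may be present upstream):

    {
            .desc = "CRC32 instructions",
            .capability = ARM64_HAS_CRC32,
            .type = ARM64_CPUCAP_SYSTEM_FEATURE,
            .matches = has_cpuid_feature,
            .sys_reg = SYS_ID_AA64ISAR0_EL1,
            .field_pos = ID_AA64ISAR0_CRC32_SHIFT,
            .min_field_value = 1,
    },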
* Merge tag 'kvmarm-for-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD (Paolo Bonzini, 2018-08-22; 1 file, -0/+20)

    KVM/arm updates for 4.19

    - Support for Group0 interrupts in guests
    - Cache management optimizations for ARMv8.4 systems
    - Userspace interface for RAS, allowing error retrieval and injection
    - Fault path optimization
    - Emulated physical timer fixes
    - Random cleanups
| * arm64: KVM: Add support for Stage-2 control of memory types and cacheability (Marc Zyngier, 2018-07-09; 1 file, -0/+20)

    Up to ARMv8.3, the combination of Stage-1 and Stage-2 attributes results in the strongest attribute of the two stages. This means that the hypervisor has to perform quite a lot of cache maintenance just in case the guest has some non-cacheable mappings around.

    ARMv8.4 solves this problem by offering a different mode (FWB) where Stage-2 has total control over the memory attribute (this is limited to systems where both I/O and instruction fetches are coherent with the dcache). This is achieved by having a different set of memory attributes in the page tables, and a new bit set in HCR_EL2.

    On such a system, we can then safely sidestep any form of dcache management.

    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* | Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2018-08-14; 1 file, -2/+2)

    Pull arm64 updates from Will Deacon:
     "A bunch of good stuff in here. Worth noting is that we've pulled in the x86/mm branch from -tip so that we can make use of the core ioremap changes which allow us to put down huge mappings in the vmalloc area without screwing up the TLB. Much of the positive diffstat is because of the rseq selftest for arm64.

      Summary:

       - Wire up support for qspinlock, replacing our trusty ticket lock code

       - Add an IPI to flush_icache_range() to ensure that stale instructions fetched into the pipeline are discarded along with the I-cache lines

       - Support for the GCC "stackleak" plugin

       - Support for restartable sequences, plus an arm64 port for the selftest

       - Kexec/kdump support on systems booting with ACPI

       - Rewrite of our syscall entry code in C, which allows us to zero the GPRs on entry from userspace

       - Support for chained PMU counters, allowing 64-bit event counters to be constructed on current CPUs

       - Ensure scheduler topology information is kept up-to-date with CPU hotplug events

       - Re-enable support for huge vmalloc/IO mappings now that the core code has the correct hooks to use break-before-make sequences

       - Miscellaneous, non-critical fixes and cleanups"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (90 commits)
      arm64: alternative: Use true and false for boolean values
      arm64: kexec: Add comment to explain use of __flush_icache_range()
      arm64: sdei: Mark sdei stack helper functions as static
      arm64, kaslr: export offset in VMCOREINFO ELF notes
      arm64: perf: Add cap_user_time aarch64
      efi/libstub: Only disable stackleak plugin for arm64
      arm64: drop unused kernel_neon_begin_partial() macro
      arm64: kexec: machine_kexec should call __flush_icache_range
      arm64: svc: Ensure hardirq tracing is updated before return
      arm64: mm: Export __sync_icache_dcache() for xen-privcmd
      drivers/perf: arm-ccn: Use devm_ioremap_resource() to map memory
      arm64: Add support for STACKLEAK gcc plugin
      arm64: Add stack information to on_accessible_stack
      drivers/perf: hisi: update the sccl_id/ccl_id when MT is supported
      arm64: fix ACPI dependencies
      rseq/selftests: Add support for arm64
      arm64: acpi: fix alignment fault in accessing ACPI
      efi/arm: map UEFI memory map even w/o runtime services enabled
      efi/arm: preserve early mapping of UEFI memory map longer for BGRT
      drivers: acpi: add dependency of EFI for arm64
      ...
| * | arm64: use PSR_AA32 definitions (Mark Rutland, 2018-07-05; 1 file, -1/+1)

    Some code cares about the SPSR_ELx format for exceptions taken from AArch32 to inspect or manipulate the SPSR_ELx value, which is already in the SPSR_ELx format, and not in the AArch32 PSR format.

    To separate these from cases where we care about the AArch32 PSR format, migrate these cases to use the PSR_AA32_* definitions rather than COMPAT_PSR_*.

    There should be no functional change as a result of this patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: Fix mismatched cache line size detection (Suzuki K Poulose, 2018-07-05; 1 file, -1/+1)

    If there is a mismatch in the I/D min line size, we must always use the system wide safe value both in applications and in the kernel, while performing cache operations. However, we have been checking more bits than just the min line sizes, which triggers false negatives. We may need to trap the user accesses in such cases, but not necessarily patch the kernel.

    This patch fixes the check to do the right thing as advertised. A new capability will be added to check mismatches in other fields and ensure we trap the CTR accesses.

    Fixes: be68a8aaf925 ("arm64: cpufeature: Fix CTR_EL0 field definitions")
    Cc: <stable@vger.kernel.org>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Reported-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* / arm64: Check for errata before evaluating cpu features (Dirk Mueller, 2018-07-25; 1 file, -2/+2)

    Since commit d3aec8a28be3b8 ("arm64: capabilities: Restrict KPTI detection to boot-time CPUs") we rely on errata flags being already populated during feature enumeration. The order of errata and features was flipped as part of commit ed478b3f9e4a ("arm64: capabilities: Group handling of features and errata workarounds").

    Return to the original order of errata and feature evaluation to ensure errata flags are present during feature evaluation.

    Fixes: ed478b3f9e4a ("arm64: capabilities: Group handling of features and errata workarounds")
    CC: Suzuki K Poulose <suzuki.poulose@arm.com>
    CC: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Dirk Mueller <dmueller@suse.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: kpti: Use early_param for kpti= command-line option (Will Deacon, 2018-06-22; 1 file, -1/+1)

    We inspect __kpti_forced early on as part of the cpufeature enable callback which remaps the swapper page table using non-global entries. Ensure that __kpti_forced has been updated to reflect the kpti= command-line option before we start using it.

    Fixes: ea1e3de85e94 ("arm64: entry: Add fake CPU feature for unmapping the kernel at EL0")
    Cc: <stable@vger.kernel.org> # 4.16.x-
    Reported-by: Wei Xu <xuwei5@hisilicon.com>
    Tested-by: Sudeep Holla <sudeep.holla@arm.com>
    Tested-by: Wei Xu <xuwei5@hisilicon.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
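The shape of the fix, as a hedged sketch: register the parser with early_param() rather than __setup() so that __kpti_forced is populated before the cpufeature callbacks run (the real parser in cpufeature.c is very close to this, but may differ in detail):

    static int __kpti_forced __read_mostly;  /* 0: auto, >0: force on, <0: force off */

    static int __init parse_kpti(char *str)
    {
            bool enabled;
            int ret = strtobool(str, &enabled);

            if (ret)
                    return ret;

            __kpti_forced = enabled ? 1 : -1;
            return 0;
    }
    early_param("kpti", parse_kpti);    /* previously registered via __setup("kpti=", ...) */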