| author | Ard Biesheuvel <ardb@kernel.org> | 2024-04-19 19:39:32 +0200 |
|---|---|---|
| committer | mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> | 2024-04-22 13:05:21 +0000 |
| commit | 7dd7b890582b4d696ca5fd436dbc5fb4bc30e385 (patch) | |
| tree | fc9af8c5082ec94280854d837ad5997b2baf94ce /ArmPlatformPkg | |
| parent | f29160a89699ddbe3dbc03d29857fd6fa2719e8e (diff) | |
| download | edk2-7dd7b890582b4d696ca5fd436dbc5fb4bc30e385.tar.gz edk2-7dd7b890582b4d696ca5fd436dbc5fb4bc30e385.tar.bz2 edk2-7dd7b890582b4d696ca5fd436dbc5fb4bc30e385.zip | |
ArmVirtPkg/ArmVirtQemu: always build XIP code with strict alignment
The optimization that enabled entry with the MMU and caches enabled at EL1
removed the strict alignment requirement for XIP code (roughly, any code
that might execute with the MMU and caches off, which means SEC and PEI
phase modules but also *all* BASE libraries). The rationale was that QEMU
can only run guest payloads at EL2 under TCG emulation, which used to
ignore alignment violations, and that execution at EL1 would always occur
with the MMU enabled.
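To illustrate why strict alignment matters here (a hypothetical example, not
code from this patch): with the MMU off, AArch64 treats all data accesses as
Device memory accesses, which must be aligned, yet without -mstrict-align
the compiler may emit unaligned loads even for innocent-looking C:

```c
#include <stdint.h>
#include <string.h>

/* Minimal sketch (hypothetical, not from the patch). Unless strict
 * alignment is requested, GCC/Clang on AArch64 may lower this memcpy
 * to a single 64-bit LDR at Buf, even when Buf is not 8-byte aligned.
 * With the MMU off, memory is treated as Device-nGnRnE and that load
 * raises an alignment fault; built with -mstrict-align, the compiler
 * instead emits byte-wise accesses that are always safe. */
uint64_t
ReadUnaligned64 (const uint8_t *Buf)
{
  uint64_t  Value;

  memcpy (&Value, Buf, sizeof (Value));
  return Value;
}
```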
This assumption no longer holds: not only does QEMU now enforce strict
alignment for memory accesses with device semantics, but there are also
cases where this code might execute at EL2 under virtualization (i.e.,
under NV2 nested virtualization), where strict alignment is required
too.
The latter case could also be optimized, by enabling VHE and pretending
that execution occurs at EL1, which would allow the existing logic for
entry with the MMU enabled to be reused. However, this would leave
non-VHE CPUs behind.
In summary, strict alignment needs to be enforced for any code that
may execute with the MMU off, so drop the override that sets the XIP
flags to the empty string.
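For context, the override being dropped looks roughly like this in
ArmVirtPkg/ArmVirtQemu.dsc (a sketch; the exact placement and any
surrounding conditionals are assumptions). In a DSC [BuildOptions]
section, '==' replaces the toolchain default instead of appending to it:

```
[BuildOptions]
  # Sketch of the removed override: '==' replaces the default AARCH64
  # XIP flags from tools_def (-mstrict-align -mgeneral-regs-only) with
  # the empty string, so XIP modules were built without strict
  # alignment. Deleting this line restores the toolchain default.
  GCC:*_*_AARCH64_CC_XIPFLAGS ==
```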
Cc: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Diffstat (limited to 'ArmPlatformPkg')
0 files changed, 0 insertions, 0 deletions