Fix various typos in ArmPkg.
Signed-off-by: Coeur <coeur@gmx.fr>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
https://bugzilla.tianocore.org/show_bug.cgi?id=1373
Replace BSD 2-Clause License with BSD+Patent License. This change is
based on the following emails:
https://lists.01.org/pipermail/edk2-devel/2019-February/036260.html
https://lists.01.org/pipermail/edk2-devel/2018-October/030385.html
RFCs with detailed process for the license change:
V3: https://lists.01.org/pipermail/edk2-devel/2019-March/038116.html
V2: https://lists.01.org/pipermail/edk2-devel/2019-March/037669.html
V1: https://lists.01.org/pipermail/edk2-devel/2019-March/037500.html
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
Currently, we always invalidate the TLBs entirely after making
any modification to the page tables. Now that we have introduced
strict memory permissions in quite a number of places, such
modifications occur much more often, and it is better for performance
to flush only those TLB entries that are actually affected by
the changes.
At the same time, relax some system-wide data synchronization barriers
to non-shareable scope. When running under UEFI, we don't share virtual
address translations with other masters, unless we are running under
virtualization, but in that case the host will upgrade them as
appropriate (by setting an override at EL2).
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
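
For illustration, a minimal sketch of the per-entry invalidation this
enables, written with GCC inline assembly rather than the separate .S
files ArmLib actually uses; the function name and the non-shareable
scope shown are assumptions based on the description above.

  #include <Base.h>

  // Invalidate only the TLB entry covering one VA (a sketch, not the
  // actual ArmLib routine).
  STATIC
  VOID
  InvalidateTlbEntryByVa (
    IN UINT64  Address
    )
  {
    UINT64  Page = Address >> 12;   // TLBI operands carry VA[55:12]

    __asm__ __volatile__ (
      "dsb  nshst       \n"   // make the table update visible (non-shareable)
      "tlbi vaae1, %0   \n"   // invalidate by VA, all ASIDs, EL1&0 regime
      "dsb  nsh         \n"   // complete the TLB maintenance
      "isb              \n"   // resynchronize the instruction stream
      : : "r" (Page) : "memory"
      );
  }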
|
Add a helper function that returns the maximum physical address space
size as supported by the current CPU.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
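
A sketch of how such a helper can be derived from
ID_AA64MMFR0_EL1.PARange; the function name and the inline-asm register
read are illustrative (the real helper reads the register in assembly).

  #include <Base.h>

  // ID_AA64MMFR0_EL1.PARange[3:0] -> physical address width in bits.
  STATIC CONST UINT8  mParangeToBits[] = { 32, 36, 40, 42, 44, 48, 52 };

  UINTN
  MaxPhysicalAddressBits (
    VOID
    )
  {
    UINT64  Parange;

    __asm__ ("mrs %0, id_aa64mmfr0_el1" : "=r" (Parange));
    Parange &= 0xF;                        // isolate the PARange field
    if (Parange >= ARRAY_SIZE (mParangeToBits)) {
      return 32;                           // reserved encoding: be conservative
    }
    return mParangeToBits[Parange];
  }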
|
This currently isn't needed by anything in the edk2 tree, but it is
useful for externally maintained platforms that have to set this
register, e.g. to disable alignment aborts.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael Zimmermann <sigmaepsilon92@gmail.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
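
The register in question is presumably SCTLR, since that is where the
alignment-check control lives. A hypothetical out-of-tree use could look
like the following; the accessor names are assumptions, not confirmed by
the message above.

  #include <Base.h>
  #include <Library/ArmLib.h>

  #define SCTLR_BIT_A  BIT1   // SCTLR.A: alignment check enable

  // Hypothetical platform code: tolerate unaligned accesses by
  // clearing SCTLR.A (accessor names assumed).
  VOID
  DisableAlignmentAborts (
    VOID
    )
  {
    ArmWriteSctlr (ArmReadSctlr () & ~SCTLR_BIT_A);
  }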
|
The ARMv8.2-FP16 extension introduces support for half precision
floating point and the processor ID registers have been updated to
enable detection of the implementation.
The possible values for the FP bits in ID_AA64PFR0_EL1[19:16] are:
- 0000 : Floating-point is implemented.
- 0001 : Floating-point including Half-precision support is
implemented.
- 1111 : Floating-point is not implemented.
- All other values are reserved.
Previously, ArmEnableVFP() compared the FP bits with 0000b to see if
FP was implemented before enabling it. This check is modified to
enable the FP if bits [19:16] are not 1111b.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Sami Mujawar <sami.mujawar@arm.com>
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
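
A sketch of the corrected test; the inline register read stands in for
whatever accessor the implementation uses.

  #include <Base.h>

  #define ID_AA64PFR0_FP_SHIFT  16
  #define ID_AA64PFR0_FP_MASK   0xF

  // TRUE for FP = 0000b (base FP) and 0001b (FP with half precision);
  // FALSE only for the 1111b "not implemented" encoding.
  BOOLEAN
  FloatingPointIsImplemented (
    VOID
    )
  {
    UINT64  Pfr0;

    __asm__ ("mrs %0, id_aa64pfr0_el1" : "=r" (Pfr0));
    return ((Pfr0 >> ID_AA64PFR0_FP_SHIFT) & ID_AA64PFR0_FP_MASK)
           != ID_AA64PFR0_FP_MASK;
  }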
|
Added helper functions for reading and writing the
CNTHCTL_EL2 register.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Sami Mujawar <sami.mujawar@arm.com>
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
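
Typical use, sketched below: granting EL1 access to the physical counter
and timer before dropping out of EL2. The helper names follow the
read/write pattern described above; the bit names come from the
architecture, but treat the exact identifiers as assumptions.

  #include <Base.h>
  #include <Library/ArmLib.h>

  #define CNTHCTL_EL1PCTEN  BIT0   // EL1 access to the physical counter
  #define CNTHCTL_EL1PCEN   BIT1   // EL1 access to the physical timer

  VOID
  AllowEl1PhysicalTimerAccess (
    VOID
    )
  {
    ArmWriteCntHctl (ArmReadCntHctl () | CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN);
  }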
|
In preparation for enabling stack alignment checking, which is mandated
by the UEFI spec for AARCH64, add the code to manage this bit to ArmLib.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
Stack and unstack the frame pointer according to the AAPCS in
AArch64AllDataCachesOperation ().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
The generic timer support libraries call the actual system register
accessor function via a single pair of functions ArmArchTimerReadReg()
and ArmArchTimerWriteReg(), which take an enum argument to identify
the register, and return output values by pointer reference.
Since these functions are never called with a non-immediate argument,
we can simply replace each invocation with the underlying system register
accessor instead. This is mostly functionally equivalent, with the
exception of the bounds check for the enum (which is pointless given the
fact that we never pass a variable), the check for the presence of the
architected timer (which only makes sense for ARMv7, but is highly unlikely
to vary between platforms that are similar enough to run the same firmware
image), and a check for enum values that refer to the HYP view of the timer,
which we never referred to anywhere in the code in the first place.
So get rid of the middle man, and update the ArmGenericTimerPhyCounterLib
and ArmGenericTimerVirtCounterLib implementations to call the system
register accessors directly.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
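
The shape of the change at a call site, roughly; CntvCt is used as the
example register, and the exact enum member and accessor declarations
live in the ArmLib/ArmArchTimer headers.

  #include <Base.h>
  #include <Library/ArmLib.h>

  UINT64
  GetVirtualCount (
    VOID
    )
  {
    UINT64  CounterValue;

    // Before: enum-keyed indirection with an output pointer:
    //   ArmArchTimerReadReg (CntvCt, (VOID *)&CounterValue);

    // After: call the underlying system register accessor directly.
    CounterValue = ArmReadCntvCt ();
    return CounterValue;
  }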
|
For historical reasons, the files under ArmLib are split up into 'common'
files under Common/, containing common C files as well as AArch64 and Arm
specific asm files, and ArmV7 and AArch64 files under ArmV7/ and AArch64/,
respectively. This presumably dates back to the time when ArmLib supported
different revisions of the 32-bit architecture (i.e., pre-V7).
Since the PI spec requires V7 or later, we can simplify this to Arm/ and
AArch64/, which aligns ArmLib with the majority of other modules that carry
ARM or AArch64 specific code.
So move the files around so that shared files live at the same level as
ArmBaseLib.inf, and ARM/AArch64 specific files live in Arm/ or AArch64/,
respectively.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
The ArmBaseLib timer code does not depend on MemoryAllocationLib at
all, so remove the #includes referring to it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
This removes the following ArmLib implementations, which were, apart from
the fact that they targeted either ARM or AARCH64, fully identical:
ArmPkg/Library/ArmLib/AArch64/AArch64Lib.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibPei.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibPrePi.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibSec.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7Lib.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7LibPrePi.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7LibSec.inf
Only ArmBaseLib remains, which can fulfil the dependencies upon each of
the listed flavors.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
The function ArmReplaceLiveTranslationEntry() has been moved to
ArmMmuLib, so remove the old implementation from ArmLib.
Note that the new implementation was not exported from the object file,
and so references to it were satisfied by the old version residing in
ArmLib. Since we are removing that one, we need to export the new one
at the same time to prevent the linker from bailing with undefined
reference errors.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
Switch all users of ArmLib that depend on the MMU routines to the new,
separate ArmMmuLib. This needs to occur in one go, since the MMU
routines are removed from ArmLib build at the same time, to prevent
conflicting symbols.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
|
On some platforms, performing cache maintenance on regions that are backed
by NOR flash results in SErrors. Since cache maintenance is unnecessary in
that case, create a PEIM specific version that only performs said cache
maintenance in its constructor if the module is shadowed in RAM. To avoid
performing the cache maintenance if the MMU code is not used to begin with,
check that explicitly in the constructor.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
Instead of cleaning the data cache to the PoU by virtual address and
subsequently invalidating the entire I-cache, invalidate only the
range that we just cleaned. This way, we don't invalidate other
cachelines unnecessarily.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
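
A sketch of the resulting by-MVA sequence, with inline assembly for
illustration only; the fixed line length is a placeholder for the
CTR_EL0-derived value that another patch in this log introduces.

  #include <Base.h>

  // Clean D-cache and invalidate I-cache by MVA, over just
  // [Start, Start + Length).
  VOID
  SyncInstructionCacheRange (
    IN UINTN  Start,
    IN UINTN  Length
    )
  {
    CONST UINTN  LineLength = 64;   // placeholder; derive from CTR_EL0
    UINTN        Addr;

    for (Addr = Start & ~(LineLength - 1);
         Addr < Start + Length;
         Addr += LineLength) {
      __asm__ __volatile__ ("dc cvau, %0" : : "r" (Addr) : "memory");
    }
    __asm__ __volatile__ ("dsb ish" : : : "memory");
    for (Addr = Start & ~(LineLength - 1);
         Addr < Start + Length;
         Addr += LineLength) {
      __asm__ __volatile__ ("ic ivau, %0" : : "r" (Addr) : "memory");
    }
    __asm__ __volatile__ ("dsb ish\n\tisb" : : : "memory");
  }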
|
When we split a block entry into a table entry, the UXN/PXN/XN permission
attributes are inherited both by the new table entry and by the new block
entries at the next level down. Unlike the NS bit, which only affects the
next level of lookup, the XN table bits supersede the permissions of the
final translation, and setting the permissions at multiple levels is not
only redundant, it also prevents us from lifting XN restrictions on a
subregion of the original block entry by simply clearing the appropriate
bits at the lowest level.
So drop the code that sets the UXN/PXN/XN bits on the table entries.
Reported-by: "Oliyil Kunnil, Vishal" <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
On ARM, manipulating live page tables is cumbersome since the architecture
mandates the use of break-before-make, i.e., replacing a block entry with
a table entry requires an intermediate step via an invalid entry, or TLB
conflicts may occur.
Since it is not generally feasible to decide in the page table manipulation
routines whether such an invalid entry will result in those routines
themselves becoming unavailable, use a function that is callable with
the MMU off (i.e., a leaf function that does not access the stack) to
perform the change of a block entry into a table entry.
Note that the opposite should never occur, i.e., table entries are never
coalesced into block entries.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
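
The ordering constraint, sketched in C with inline assembly. The real
routine is an assembly leaf function precisely so that it keeps
executing while the intermediate invalid entry is live; this sketch
ignores that subtlety and only shows the break-before-make sequence.

  #include <Base.h>

  // Break-before-make: block entry -> invalid entry -> table entry.
  VOID
  ReplaceLiveEntry (
    IN UINT64  *Entry,
    IN UINT64  Value,
    IN UINT64  Address
    )
  {
    *Entry = 0;                                       // break
    __asm__ __volatile__ ("dsb nshst" : : : "memory");
    __asm__ __volatile__ ("tlbi vaae1, %0"            // drop stale translation
                          : : "r" (Address >> 12) : "memory");
    __asm__ __volatile__ ("dsb nsh" : : : "memory");
    *Entry = Value;                                   // make
    __asm__ __volatile__ ("dsb nshst\n\tisb" : : : "memory");
  }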
|
The XN attribute is now set automatically if the region is declared as
device memory. However, the function ArmMemoryAttributeToPageAttribute()
returns attributes for block and page descriptors, not for table
descriptors, so the TT_TABLE_*XN attributes it applied did not actually
take effect. We need to use TT_*XN_MASK instead.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
|
The function ArmClearMemoryRegionReadOnly() was supposed to undo the
effect of ArmSetMemoryRegionReadOnly(), but instead, it sets the permissions
to EL0-no access, EL1-read-only. Since the EL0 bit should be 1 to align
with EL2/3 (where the bit is SBO), use TT_AP_RW_RW instead, which makes the
entry read-write for EL0 when executing at EL1, and read-write for all other
levels.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
|
Add ArmReadHcr() to ArmLib to enable read-modify-write of the HCR system
register.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
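
With a read accessor available, read-modify-write becomes trivial; the
IMO routing bit below is used purely as an example.

  #include <Base.h>
  #include <Library/ArmLib.h>

  #define ARM_HCR_IMO  BIT4   // HCR.IMO: route physical IRQs to EL2

  VOID
  RouteIrqsToEl2 (
    VOID
    )
  {
    ArmWriteHcr (ArmReadHcr () | ARM_HCR_IMO);
  }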
|
The AArch64 DAIF bits are different for reading (mrs) versus writing
(msr). The bitmask definitions assumed they were the same, causing
incorrect results when trying to determine the current interrupt
state through ArmGetInterruptState.
The logic for interpreting the DAIF read data using the csel instruction
was also incorrect and is fixed.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
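
Concretely: 'mrs Xt, daif' returns the PSTATE layout with D/A/I/F in
bits [9:6], while 'msr daifset/daifclr, #imm' takes a 4-bit immediate
with D/A/I/F in bits [3:0]. A sketch of a correct read-side test:

  #include <Base.h>

  #define DAIF_READ_IRQ_BIT  BIT7   // I bit as read via 'mrs Xt, daif'

  BOOLEAN
  InterruptsAreEnabled (
    VOID
    )
  {
    UINT64  Daif;

    __asm__ ("mrs %0, daif" : "=r" (Daif));
    return (Daif & DAIF_READ_IRQ_BIT) == 0;   // set bit = IRQs masked
  }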
|
This patch updates the ArmPkg variant of InvalidateInstructionCacheRange to
flush the data cache only to the point of unification (PoU). This improves
performance and also allows invalidation in scenarios where it would be
inappropriate to flush to the point of coherency (like when executing code
from L2 configured as cache-as-ram).
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Added AARCH64 and ARM/GCC implementations of the above.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19174 6f19259b-4bc3-4df7-8a09-765794883524
|
In ArmLib, there exists an alias for ArmDataSynchronizationBarrier,
named after one of several names for the pre-ARMv6 cp15 operation that
was formalised into the Data Synchronization Barrier in ARMv6.
This alias is also the one called from within ArmLib, in preference to
the correct name. Through the power of code reuse, this name slipped
into the AArch64 variant as well.
Expunge it from the codebase.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18915 6f19259b-4bc3-4df7-8a09-765794883524
|
As EDK2 runs in an idmap, we do not use TTBR1_EL1, nor do we configure
it. TTBR1_EL1 may contain UNKNOWN values if it is not programmed since
reset.
Prior to enabling the MMU, we do not set TCR_EL1.EPD1, and hence the CPU
may make page table walks via TTBR1_EL1 at any time, potentially using
UNKNOWN values. This can result in a number of potential problems (e.g.
the CPU may load from MMIO registers as part of a page table walk).
Additionally, in the presence of Cortex-A57 erratum #822227, we must
program TCR_EL1.TG1 == 0b1x (e.g. 4KB granule) regardless of the value
of TCR_EL1.EPD1, to ensure that EDK2 can make forward progress under a
hypervisor which makes use of PAR_EL1.
This patch ensures that we program TCR_EL1.EPD1 and TCR_EL1.TG1 as above
to avoid these issues. TCR_EL1.TG1 is set to 4K for all targets, as any
CPU capable of running EDK2 must support this granule, and given that
TCR_EL1.EPD1 is set, programming the field is not detrimental in the
absence of the erratum.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18903 6f19259b-4bc3-4df7-8a09-765794883524
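
The fields involved, sketched with bit positions from the ARMv8 ARM:

  #include <Base.h>

  #define TCR_EPD1     BIT23          // disable table walks via TTBR1_EL1
  #define TCR_TG1_4KB  (2ULL << 30)   // TG1 = 0b10: 4 KB granule for TTBR1

  // Fold the workaround into a TCR value before enabling the MMU.
  UINT64
  ApplyTtbr1Workaround (
    IN UINT64  Tcr
    )
  {
    return Tcr | TCR_EPD1 | TCR_TG1_4KB;
  }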
|
To prevent speculative instruction fetches from MMIO ranges that may
have side effects on reads, the architecture requires device mappings
to be created with the XN or UXN/PXN bits set (for the ARM/EL2 and
EL1&0 translation regimes, respectively).
Note that, in the ARM case, this involves moving all accesses to a
client domain since permission attributes like XN are ignored from
a manager domain. The use of a client domain is actually mandated
explicitly by the UEFI spec.
Reported-by: Heyi Guo <heyi.guo@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18891 6f19259b-4bc3-4df7-8a09-765794883524
|
The function GcdAttributeToArmAttribute() is not used anywhere in the
code base, and is only defined for AARCH64 and not for ARM. It also
fails to set the bits for shareability and non-executability that we
require for correct operation. So remove it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18888 6f19259b-4bc3-4df7-8a09-765794883524
|
Mark all cached memory mappings as shareable (or inner shareable on
AArch64) so that our view of memory is kept coherent by the hardware.
This is relevant for things like coherent DMA and virtualization (where
a guest may migrate to another core) but in general, since UEFI on ARM
is mostly used in a context where the secure firmware and possibly a
secure OS are already up and running, it is best to refrain from using
any non-shareable mappings.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18778 6f19259b-4bc3-4df7-8a09-765794883524
|
There is no need to issue a full data synchronization barrier and an
instruction synchronization barrier after each and every set/way or
MVA cache maintenance operation. For the set/way case, we can simply
remove them, since the set/way outer loop already issues the required
barriers after completing its traversal over all the cache levels.
For the MVA case, move the data synchronization barrier out of the
loop, and add the instruction synchronization barrier to the I-cache
invalidation by MVA routine.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18755 6f19259b-4bc3-4df7-8a09-765794883524
|
The stride used by the cache maintenance by MVA instructions should
be retrieved from CTR_EL0.DminLine and CTR_EL0.IminLine, whose values
reflect the actual geometry of the caches. Using CCSIDR for this purpose
violates the architecture.
Also, move the line length accessors to common code, since there is no
need to keep them separate between ARMv7 and AArch64.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18754 6f19259b-4bc3-4df7-8a09-765794883524
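
A sketch of deriving the D-side stride from CTR_EL0; the I-side uses
IminLine in bits [3:0] the same way.

  #include <Base.h>

  // CTR_EL0.DminLine (bits [19:16]) is the log2 of the smallest D-cache
  // line, counted in 4-byte words, so the stride is 4 << DminLine bytes.
  UINTN
  DataCacheLineLength (
    VOID
    )
  {
    UINT64  Ctr;

    __asm__ ("mrs %0, ctr_el0" : "=r" (Ctr));
    return (UINTN)4 << ((Ctr >> 16) & 0xF);
  }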
|
The ARM architecture does not allow the actual geometries of the caches
to be inferred from the CCSIDR cache info system register, since the
geometry it reports is intended for performing cache maintenance by
set/way and nothing else. Since the ArmLib cache info routines are
based solely on CCSIDR contents, they should not be used.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18753 6f19259b-4bc3-4df7-8a09-765794883524
|
The function ArmCleanDataCacheToPoU() has no users, and its purpose
is unclear, since it uses cache maintenance by set/way to perform
the clean to PoU, which is a dubious practice to begin with. So
remove the declaration and all definitions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18752 6f19259b-4bc3-4df7-8a09-765794883524
|
Replace all instances of ArmDataSyncronizationBarrier with
ArmDataSynchronizationBarrier.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18751 6f19259b-4bc3-4df7-8a09-765794883524
|
The ARM architecture requires a DSB to complete TLB maintenance, with a
subsequent ISB being required to synchronize subsequent items in the
current instruction stream against the completed TLB maintenance.
The ArmEnableMmu function is currently missing the DSB, and hence the
TLB maintenance is not guaranteed to have completed at the point the MMU
is enabled. This may result in unpredictable behaviour.
The DSB subsequent to the write to SCTLR_EL1 is unnecessary; the ISB
alone is sufficient to complete all prior instructions and to
synchronise the new context with any subsequent instructions.
This patch adds missing DSBs to complete TLB maintenance, and removes
the unnecessary trailing DSB.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18749 6f19259b-4bc3-4df7-8a09-765794883524
|
Use the refactored UpdateRegionMapping () to traverse the translation
tables, splitting block entries along the way if required, and apply an
AND/OR mask pair to each entry to set or clear the PXN/UXN/XN or RO bits.
For now, the 32-bit ARM versions remain unimplemented.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18587 6f19259b-4bc3-4df7-8a09-765794883524
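
The essence of the mask-and-or pair, sketched; the names are
illustrative rather than the exact prototype.

  #include <Base.h>

  // Applied to every block/page entry the traversal visits.
  STATIC
  VOID
  ApplyAttributes (
    IN OUT UINT64  *Entry,
    IN     UINT64  AttributeClearMask,
    IN     UINT64  AttributeSetMask
    )
  {
    *Entry = (*Entry & ~AttributeClearMask) | AttributeSetMask;
  }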
|
Move the page table traversal and splitting logic to a separate function
UpdateRegionMapping() and refactor it slightly so we can reuse it later to
implement non-executable regions, for the stack. This primarily involves
adding a value/mask pair to the function prototype that allows us to flip
arbitrary bits on each block entry as the page tables are traversed.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18586 6f19259b-4bc3-4df7-8a09-765794883524
|
The non-privileged execute never (UXN) page table bit defined for the
EL1&0 translation regime and the execute never (XN) bit defined for the
EL2 and EL3 translation regimes happen to share the same bit position,
but they are in fact defined distinctly by the architecture. So define
both bits explicitly, and add comments in places where we take advantage
of the fact that they share the same bit position.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18585 6f19259b-4bc3-4df7-8a09-765794883524
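
The distinction in macro form, following the pattern of the ArmMmuLib
definitions (treat the exact names as illustrative):

  #define TT_PXN_MASK  BIT53   // EL1&0 regime: privileged execute-never
  #define TT_UXN_MASK  BIT54   // EL1&0 regime: unprivileged execute-never
  #define TT_XN_MASK   BIT54   // EL2/EL3 regimes: execute-never
                               // (same position as UXN, distinct meaning)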
|
All our page tables are allocated from memory whose cacheability
attributes agree with the cacheability bits in the MMU control
register, so there is no need for explicit cache maintenance after
updating the page tables. And even if there were, Set/Way operations
are not appropriate anyway for ensuring that these changes make it to
main memory. So just remove the explicit cache maintenance completely.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18570 6f19259b-4bc3-4df7-8a09-765794883524
|
Now that the AArch64 MMU code correctly identifies and handles
naturally aligned regions of more than 2 MB in size, it will happily
try to use block mappings at level 0 to map huge memory regions, such
as the single cacheable 1:1 mapping we use for Xen domU to map the
entire PA space. However, block mappings are not supported at level 0
so the resulting translation tables will be incorrect, causing
execution to fail as soon as the MMU is enabled.
So use level 1 as the minimum level at which to perform block
translations.
Reported-by: Julien Grall <julien.grall@citrix.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18568 6f19259b-4bc3-4df7-8a09-765794883524
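
For reference, with the 4 KB granule each level of the table hierarchy
covers the following span per entry:

  level 0: 512 GB - table descriptors only
  level 1:   1 GB - block or table
  level 2:   2 MB - block or table
  level 3:   4 KB - page descriptors only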
|
During a page entry attribute update, if there are table entries between
the starting BlockEntry and LastBlockEntry, those table entries will be
overwritten as block entries and the memory allocated for the tables
will be leaked.
So instead, we break the inner loop when we find a table entry and run
the outer loop again to step into the table by the same logic.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
[ardb: move termination condition check inside the loop]
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18425 6f19259b-4bc3-4df7-8a09-765794883524
|
The code below has a bug, since *BlockEntrySize and *TableLevel are not
updated accordingly:
  if (IndexLevel == PageLevel) {
    // And get the appropriate BlockEntry at the next level
    BlockEntry = (UINT64 *)TT_GET_ENTRY_FOR_ADDRESS (TranslationTable,
                             IndexLevel + 1, RegionStart);

    // Set the last block for this new table
    *LastBlockEntry = TT_LAST_BLOCK_ADDRESS (TranslationTable,
                        TT_ENTRY_COUNT);
  }
It also doesn't check recursively to get the last level; e.g., the
initial PageLevel may be 1 while we already have level 2 and 3 tables
at this address.
What's more, *LastBlockEntry was not updated when we get a table and
IndexLevel != PageLevel.
So we reorganize the sequence, only updating TranslationTable,
PageLevel and BlockEntry in the loop, and setting the other output
parameters with the final PageLevel before returning. LastBlockEntry
becomes an OUT-only parameter.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18424 6f19259b-4bc3-4df7-8a09-765794883524
|
There is a hidden bug in the code below:
  (1 << BaseAddressAlignment) & *BlockEntrySize
From the disassembly, we can see that the compiler treats the literal 1
as an INT32 by default, so we get 0xFFFFFFFF80000000 when
BaseAddressAlignment is equal to 31, and we always end up with 31 when
the alignment is larger than 31.
  if ((1 << BaseAddressAlignment) & *BlockEntrySize) {
    5224:  f9404be0  ldr   x0, [sp,#144]
    5228:  2a0003e1  mov   w1, w0
    522c:  52800020  mov   w0, #0x1    // #1
    5230:  1ac12000  lsl   w0, w0, w1
    5234:  93407c01  sxtw  x1, w0
The bug can be reproduced on QEMU AARCH64; by adding some debug prints,
we can see lots of level 1 tables created (for 1GB blocks) even when
the region is large enough to use a 512GB block size.
Use LowBitSet64() from BaseLib instead to fix the bug.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18423 6f19259b-4bc3-4df7-8a09-765794883524
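
The trap in isolation, as a two-line demonstration (not the actual fix):

  #include <Base.h>

  UINT64  Wrong = 1 << 31;      // INT32 shift, sign-extends: 0xFFFFFFFF80000000
  UINT64  Right = 1ULL << 31;   // 64-bit shift, as intended: 0x0000000080000000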
|
The bug can be triggered when the alignment of Base exceeds Length by
two levels of page granularity, e.g.:
  Base is 0x4000_0000, Length is 0x1000
The original code would move to the 2MB page level, leaving a negative
remaining length.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18422 6f19259b-4bc3-4df7-8a09-765794883524
|
The code has a simple bug in calculating the aligned page table
address. We can just use AllocateAlignedPages() from
MemoryAllocationLib instead.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18421 6f19259b-4bc3-4df7-8a09-765794883524
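
For reference, the MemoryAllocationLib call that replaces the
hand-rolled alignment arithmetic; the page count here is illustrative.

  #include <Base.h>
  #include <Library/MemoryAllocationLib.h>

  VOID *
  AllocateRootTranslationTable (
    VOID
    )
  {
    // One 4 KB-aligned page; the alignment is no longer computed by hand.
    return AllocateAlignedPages (1, SIZE_4KB);
  }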
|
We need to use the msr instruction to write a system register. It seems
the code was simply copied from ArmReadCntkCtl().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17440 6f19259b-4bc3-4df7-8a09-765794883524
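
The corrected pattern, sketched with inline assembly for illustration
(edk2 implements these accessors in .S files):

  #include <Base.h>

  UINTN
  ReadCntkCtl (
    VOID
    )
  {
    UINTN  Value;

    __asm__ ("mrs %0, cntkctl_el1" : "=r" (Value));    // read: mrs
    return Value;
  }

  VOID
  WriteCntkCtl (
    IN UINTN  Value
    )
  {
    __asm__ __volatile__ ("msr cntkctl_el1, %0"        // write: msr, not mrs
                          : : "r" (Value));
  }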
|
This removes the range size threshold above which cache maintenance
instructions that operate on VA ranges are 'promoted' to use set/way
instructions.
Doing so is unsafe: set/way operations are fundamentally different
from VA operations, and really only suitable for cleaning or invalidating
a cache when turning it on or off.
To quote the ARM ARM (DDI0487A_d G3.4):
"""
Since the set/way instructions are performed only locally, there is no
guarantee of the atomicity of cache maintenance between different PEs,
even if those different PEs are each performing the same cache maintenance
instructions at the same time. Since any cacheable line can be allocated
into the cache at any time, it is possible for [a] cache line to migrate
from an entry in the cache of one PE to the cache of a different PE in a
manner that the cache line avoids being affected by set/way based cache
maintenance. Therefore, ARM strongly discourages the use of set/way
instructions to manage coherency in coherent systems.
"""
Contributed-under: TianoCore Contribution Agreement 1.0
Reviewed-by: Olivier Martin <Olivier.Martin@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17176 6f19259b-4bc3-4df7-8a09-765794883524
|
From the AArch64 Procedure Call Standard (ARM IHI 0055B):
5.2.2.1 Universal stack constraints
At all times the following basic constraints must hold:
- SP mod 16 = 0. The stack must be quad-word aligned.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@16327 6f19259b-4bc3-4df7-8a09-765794883524
|
ArmInvalidateInstructionAndDataTlb() was doing the same thing as
ArmInvalidateTlb().
Both invalidate Data and Instruction TLBs.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@16253 6f19259b-4bc3-4df7-8a09-765794883524