path: root/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
author    Ard Biesheuvel <ard.biesheuvel@linaro.org>  2019-01-07 08:15:01 +0100
committer Ard Biesheuvel <ard.biesheuvel@linaro.org>  2019-01-29 11:24:02 +0100
commit    d5788777bcc75936cc0e6acb540a5ee6ac77866b (patch)
tree      3662febef191bf6e83feb297e1462169e03555c5 /ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
parent    f34b38fae614c096d9c6afdc02b8448ff38134cd (diff)
ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
Currently, we always invalidate the TLBs entirely after making any modification to the page tables. Now that we have introduced strict memory permissions in quite a number of places, such modifications occur much more often, and it is better for performance to flush only those TLB entries that are actually affected by the changes.

At the same time, relax some system-wide data synchronization barriers to non-shared. When running in UEFI, we don't share virtual address translations with other masters, unless we are running under virt, but in that case, the host will upgrade them as appropriate (by setting an override at EL2).

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Diffstat (limited to 'ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S')
-rw-r--r-- ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
index b7173e00b0..175fb58206 100644
--- a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
+++ b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
@@ -124,15 +124,15 @@ ASM_FUNC(ArmSetMAIR)
// IN VOID *MVA // X1
// );
ASM_FUNC(ArmUpdateTranslationTableEntry)
- dc civac, x0 // Clean and invalidate data line
- dsb sy
+ dsb nshst
+ lsr x1, x1, #12
EL1_OR_EL2_OR_EL3(x0)
1: tlbi vaae1, x1 // TLB Invalidate VA , EL1
b 4f
2: tlbi vae2, x1 // TLB Invalidate VA , EL2
b 4f
3: tlbi vae3, x1 // TLB Invalidate VA , EL3
-4: dsb sy
+4: dsb nsh
isb
ret