author		Mark Rutland <mark.rutland@arm.com>		2013-04-30 11:11:15 +0100
committer	Catalin Marinas <catalin.marinas@arm.com>	2013-04-30 15:53:01 +0100
commit		c47d6a04e6ed22ccc5d89aaf2a136bf4971de310 (patch)
tree		93e33762f824a13ff3a9abdee0fb499db0d1dfc9 /arch/arm64
parent		1ae90e79051318c34d5a75c2ef5b9a55bd22f2ed (diff)
arm64: klib: bitops: fix unpredictable stxr usage
We're currently relying on unpredictable behaviour in our testops
(test_and_*_bit), as stxr is unpredictable when the status register and
the source register are the same. This patch reallocates the status
register so as to bring us back into the realm of predictable behaviour.

Boot tested on an AEMv8 model.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
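For context, the constraint being worked around is architectural: the status
register written by stxr must be distinct from the registers holding the
stored value and the address, otherwise the instruction's behaviour is
unpredictable. Below is a minimal illustrative sketch, in GNU C inline
assembly rather than the kernel's actual bitops.S macros, of the same
load-exclusive/store-exclusive retry pattern with the status register
allocated separately; the function name and types are hypothetical.

	#include <stdint.h>

	/*
	 * Illustrative only -- not the kernel's implementation.  Same
	 * ldxr/stxr retry pattern as the testops macro, with the stxr
	 * status kept in its own register (here "status", analogous to
	 * w5 in the patch) instead of reusing the stored-value register.
	 */
	static inline int sketch_test_and_set_bit(volatile uint64_t *word,
						  unsigned int bit)
	{
		uint64_t old, val, mask = 1ULL << bit;
		uint32_t status;	/* stxr result, its own register */

		asm volatile(
		"1:	ldxr	%[old], %[mem]\n"		/* load-exclusive      */
		"	orr	%[val], %[old], %[mask]\n"	/* set the bit         */
		"	stxr	%w[status], %[val], %[mem]\n"	/* store-exclusive     */
		"	cbnz	%w[status], 1b\n"		/* retry if excl. lost */
		"	dmb	ish\n"				/* like smp_dmb ish    */
		: [old] "=&r" (old), [val] "=&r" (val),
		  [status] "=&r" (status), [mem] "+Q" (*word)
		: [mask] "r" (mask)
		: "memory");

		return (old >> bit) & 1;
	}

The essential point is only the register allocation: before the patch, the
stxr status register w2 aliased x2, the register holding the value being
stored, which the architecture marks unpredictable; giving the status its
own register (w5 in the patch, "status" in this sketch) restores defined
behaviour.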
Diffstat (limited to 'arch/arm64')
-rw-r--r--	arch/arm64/lib/bitops.S	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/lib/bitops.S b/arch/arm64/lib/bitops.S
index fd1e801b53e7..eaed8bbd78fc 100644
--- a/arch/arm64/lib/bitops.S
+++ b/arch/arm64/lib/bitops.S
@@ -50,8 +50,8 @@ ENTRY( \name )
 1:	ldxr	x2, [x1]
 	lsr	x0, x2, x3		// Save old value of bit
 	\instr	x2, x2, x4		// toggle bit
-	stxr	w2, x2, [x1]
-	cbnz	w2, 1b
+	stxr	w5, x2, [x1]
+	cbnz	w5, 1b
 	smp_dmb	ish
 	and	x0, x0, #1
 3:	ret