path: arch/arm64/include/asm/atomic_lse.h
Commit message  (Author, Date, Files, Lines -/+)
* arch: Remove cmpxchg_double  (Peter Zijlstra, 2023-06-05, 1 file, -36/+0)
* arch: Introduce arch_{,try_}_cmpxchg128{,_local}()  (Peter Zijlstra, 2023-06-05, 1 file, -0/+31)
* arm64: atomics: lse: improve cmpxchg implementation  (Mark Rutland, 2023-03-28, 1 file, -12/+5)
* arm64: cmpxchg_double*: hazard against entire exchange variable  (Mark Rutland, 2023-01-05, 1 file, -1/+1)
* arm64: atomic: always inline the assembly  (Mark Rutland, 2022-09-09, 1 file, -17/+29)
* arm64: atomics: lse: Dereference matching size  (Kees Cook, 2022-01-20, 1 file, -1/+1)
* arm64: atomics: lse: define RETURN ops in terms of FETCH ops  (Mark Rutland, 2021-12-14, 1 file, -34/+14)
* arm64: atomics: lse: improve constraints for simple ops  (Mark Rutland, 2021-12-14, 1 file, -12/+18)
* arm64: atomics: lse: define ANDs in terms of ANDNOTs  (Mark Rutland, 2021-12-14, 1 file, -30/+4)
* arm64: atomics lse: define SUBs in terms of ADDs  (Mark Rutland, 2021-12-14, 1 file, -122/+58)
* arm64: atomics: format whitespace consistently  (Mark Rutland, 2021-12-14, 1 file, -7/+7)
* arm64: lse: fix LSE atomics with LLVM's integrated assembler  (Sami Tolvanen, 2020-01-16, 1 file, -0/+19)
* arm64: Mark functions using explicit register variables as '__always_inline'  (Will Deacon, 2019-10-04, 1 file, -2/+4)
* arm64: avoid using hard-coded registers for LSE atomics  (Andrew Murray, 2019-08-29, 1 file, -29/+41)
* arm64: atomics: avoid out-of-line ll/sc atomics  (Andrew Murray, 2019-08-29, 1 file, -251/+114)
* Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/k...  (Linus Torvalds, 2019-07-08, 1 file, -17/+17)
|\
| * locking/atomic, arm64: Use s64 for atomic64  (Mark Rutland, 2019-06-03, 1 file, -17/+17)
* | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner, 2019-06-19, 1 file, -12/+1)
|/
* Merge branch 'locking/atomics' into locking/core, to pick up WIP commits  (Ingo Molnar, 2019-02-11, 1 file, -19/+19)
|\
| * arm64, locking/atomics: Use instrumented atomics  (Mark Rutland, 2018-11-01, 1 file, -19/+19)
* | arm64: Avoid masking "old" for LSE cmpxchg() implementation  (Will Deacon, 2018-12-07, 1 file, -2/+2)
* | arm64: Avoid redundant type conversions in xchg() and cmpxchg()  (Will Deacon, 2018-12-07, 1 file, -23/+23)
|/
* arm64: lse: Add early clobbers to some input/output asm operands  (Will Deacon, 2018-05-21, 1 file, -12/+12)
* arm64: atomics: Remove '&' from '+&' asm constraint in lse atomics  (Will Deacon, 2017-07-20, 1 file, -1/+1)
* arm64: atomic_lse: match asm register sizes  (Mark Rutland, 2017-05-09, 1 file, -2/+2)
* arm64: lse: convert lse alternatives NOP padding to use __nops  (Will Deacon, 2016-09-09, 1 file, -37/+27)
* locking/atomic, arch/arm64: Implement atomic{,64}_fetch_{add,sub,and,andnot,o...  (Will Deacon, 2016-06-16, 1 file, -0/+172)
* locking/atomic, arch/arm64: Generate LSE non-return cases using common macros  (Will Deacon, 2016-06-16, 1 file, -90/+32)
* arm64: lse: deal with clobbered IP registers after branch via PLT  (Ard Biesheuvel, 2016-02-26, 1 file, -19/+19)
* arm64: cmpxchg_dbl: fix return value type  (Lorenzo Pieralisi, 2015-11-05, 1 file, -1/+1)
* arm64: atomics: implement native {relaxed, acquire, release} atomics  (Will Deacon, 2015-10-12, 1 file, -77/+116)
* arm64: lse: fix lse cmpxchg code indentation  (Will Deacon, 2015-07-29, 1 file, -3/+3)
* arm64: atomic64_dec_if_positive: fix incorrect branch condition  (Will Deacon, 2015-07-27, 1 file, -1/+1)
* arm64: atomics: implement atomic{,64}_cmpxchg using cmpxchg  (Will Deacon, 2015-07-27, 1 file, -43/+0)
* arm64: cmpxchg: avoid "cc" clobber in ll/sc routines  (Will Deacon, 2015-07-27, 1 file, -2/+2)
* arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU  (Will Deacon, 2015-07-27, 1 file, -0/+43)
* arm64: cmpxchg: patch in lse instructions when supported by the CPU  (Will Deacon, 2015-07-27, 1 file, -0/+39)
* arm64: atomics: patch in lse instructions when supported by the CPU  (Will Deacon, 2015-07-27, 1 file, -109/+291)
* arm64: introduce CONFIG_ARM64_LSE_ATOMICS as fallback to ll/sc atomics  (Will Deacon, 2015-07-27, 1 file, -0/+170)