commit 45e15c1a375ea380d55880be2f8182cb737b60ed (patch)
author:    Guo Ren <guoren@linux.alibaba.com>  2022-07-23 21:32:34 -0400
committer: Guo Ren <guoren@linux.alibaba.com>  2022-07-31 05:24:42 -0400
tree:      d877ed3a2314bfe6ebfb2a6aa57ea9ef0ec4ec91  /arch/csky/include/asm/spinlock_types.h
parent:    4e8bb4ba5a558159ffbfa7e60322a1c151c3903c (diff)
csky: Add qspinlock support
Enable qspinlock per the requirements mentioned in commit a8ad07e5240c9
("asm-generic: qspinlock: Indicate the use of mixed-size atomics").
C-SKY only has "ldex/stex" for all atomic operations, so csky gives a
strong forward-progress guarantee for "ldex/stex": once ldex has grabbed
the cache line into L1, it blocks other cores from snooping that
address for several cycles. atomic_fetch_add and xchg16 have the same
forward-progress guarantee level on C-SKY.
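The mixed-size atomic the commit relies on (a 16-bit exchange built on
word-sized exclusive operations) can be modeled in portable C11 with a
32-bit compare-exchange on the aligned containing word. This is an
illustrative sketch of the idea, not the csky ldex/stex assembly from
the patch; the helper name xchg16_emulated is hypothetical, and the
shift math assumes a little-endian layout.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Sketch: emulate a 16-bit exchange with a 32-bit compare-exchange on
 * the aligned containing word, mirroring how a word-sized LL/SC pair
 * (like csky ldex/stex) would implement xchg16. Hypothetical helper,
 * not the kernel's code. Assumes little-endian byte order. */
uint16_t xchg16_emulated(uint16_t *ptr, uint16_t newval)
{
    uintptr_t addr = (uintptr_t)ptr;
    _Atomic uint32_t *word = (_Atomic uint32_t *)(addr & ~(uintptr_t)3);
    unsigned shift = (unsigned)(addr & 2) * 8;  /* 0 or 16 (little-endian) */
    uint32_t mask = (uint32_t)0xffffu << shift;
    uint32_t old = atomic_load(word);
    uint32_t new;

    do {
        /* Keep the neighboring halfword, replace only our 16 bits. */
        new = (old & ~mask) | ((uint32_t)newval << shift);
    } while (!atomic_compare_exchange_weak(word, &old, new));

    return (uint16_t)((old & mask) >> shift);
}
```

On LL/SC hardware the retry loop corresponds to a failed store-exclusive;
the strong forward-progress guarantee described above is what bounds how
often that loop can fail.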
Qspinlock has better code size and performance in the fast path.
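The fast path referred to above is a single compare-exchange of the whole
lock word from unlocked to locked. A toy model of that shape, with a
plain spin standing in for qspinlock's real MCS-style queued slow path
(function names here are hypothetical, not the kernel API):

```c
#include <stdatomic.h>

#define TOY_QLOCKED 1u

/* Toy model of a qspinlock-style acquire: the uncontended case is one
 * compare-exchange of the whole lock word, 0 -> LOCKED. The real
 * qspinlock queues waiters MCS-style on contention; this sketch just
 * retries the compare-exchange. */
void toy_qspin_lock(_Atomic unsigned *lock)
{
    unsigned expected = 0;

    /* Fast path: uncontended acquire in a single atomic operation. */
    if (atomic_compare_exchange_strong(lock, &expected, TOY_QLOCKED))
        return;

    /* Slow-path placeholder: spin until the word goes back to 0. */
    for (;;) {
        expected = 0;
        if (atomic_compare_exchange_weak(lock, &expected, TOY_QLOCKED))
            return;
    }
}

void toy_qspin_unlock(_Atomic unsigned *lock)
{
    /* Release: store 0 so a waiter's compare-exchange can succeed. */
    atomic_store(lock, 0);
}
```

The code-size claim follows from this shape: the common case compiles to
one atomic instruction sequence plus a branch to an out-of-line slow path.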
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Diffstat (limited to 'arch/csky/include/asm/spinlock_types.h')
-rw-r--r--  arch/csky/include/asm/spinlock_types.h | 9 +++++++++
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/arch/csky/include/asm/spinlock_types.h b/arch/csky/include/asm/spinlock_types.h
new file mode 100644
index 000000000000..75bdf3af80ba
--- /dev/null
+++ b/arch/csky/include/asm/spinlock_types.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_SPINLOCK_TYPES_H
+#define __ASM_CSKY_SPINLOCK_TYPES_H
+
+#include <asm-generic/qspinlock_types.h>
+#include <asm-generic/qrwlock_types.h>
+
+#endif /* __ASM_CSKY_SPINLOCK_TYPES_H */