author    Nicholas Piggin <npiggin@gmail.com>    2020-07-24 23:14:20 +1000
committer Michael Ellerman <mpe@ellerman.id.au>  2020-07-27 00:01:23 +1000
commit    aa65ff6b18e0366db1790609956a4ac7308c5668 (patch)
tree      98fe1ff898cadd1de9508cb6213750838b641289 /arch/powerpc/Kconfig
parent    12d0b9d6c843e7dbe739ebefcf16c7e4a45e4e78 (diff)
download  linux-aa65ff6b18e0366db1790609956a4ac7308c5668.tar.gz
          linux-aa65ff6b18e0366db1790609956a4ac7308c5668.tar.bz2
          linux-aa65ff6b18e0366db1790609956a4ac7308c5668.zip
powerpc/64s: Implement queued spinlocks and rwlocks
These have shown significantly improved performance and fairness when spinlock contention is moderate to high on very large systems.

With this series including subsequent patches, on a 16 socket 1536 thread POWER9, a stress test such as same-file open/close from all CPUs gets big speedups: 11620 op/s aggregate with simple spinlocks vs 384158 op/s (33x faster), and the difference in throughput between the fastest and slowest thread goes from 7x to 1.4x.

Thanks to the fast path being identical in terms of atomics and barriers (after a subsequent optimisation patch), single threaded performance is not changed (no measurable difference).

On smaller systems, performance and fairness seem to be generally improved. Using dbench on tmpfs as a test that starts to run into kernel spinlock contention, a 2-socket OpenPOWER POWER9 system was tested in bare metal and KVM guest configurations. Results can be found here:

https://github.com/linuxppc/issues/issues/305#issuecomment-663487453

Observations are:

- Queued spinlocks are equal when contention is insignificant, as expected and as measured with microbenchmarks.

- When there is contention, on bare metal queued spinlocks have better throughput and max latency at all points.

- When virtualised, queued spinlocks are slightly worse approaching peak throughput, but significantly better throughput and max latency at all points beyond peak, until queued spinlock maximum latency rises once the number of clients reaches 2x the vCPU count.

The regressions haven't been analysed very well yet; there are a lot of things that can be tuned, particularly the paravirtualised locking, but the numbers already look like a good net win even on relatively small systems.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131423.1362108-4-npiggin@gmail.com
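To make the fairness claim concrete: a queued (MCS-style) spinlock gives each waiting CPU its own queue node to spin on, and the lock is handed from node to node in FIFO order, so no CPU can starve. Below is a minimal user-space MCS lock in C11 as an illustration of the principle only; the generic kernel qspinlock that this commit enables is a more compact and more sophisticated variant of the same idea.

#include <stdatomic.h>
#include <stdbool.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool wait;               /* true while this waiter must keep spinning */
};

/* One node per acquiring CPU/thread; 'tail' points at the newest waiter. */
void mcs_lock(_Atomic(struct mcs_node *) *tail, struct mcs_node *self)
{
	atomic_store_explicit(&self->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&self->wait, true, memory_order_relaxed);

	/* Atomically append ourselves; the old tail is our predecessor. */
	struct mcs_node *prev =
		atomic_exchange_explicit(tail, self, memory_order_acq_rel);
	if (prev) {
		atomic_store_explicit(&prev->next, self, memory_order_release);
		/* Spin only on our own node: no cross-CPU cacheline storm. */
		while (atomic_load_explicit(&self->wait, memory_order_acquire))
			;
	}
	/* prev == NULL: queue was empty, lock taken with one atomic op. */
}

void mcs_unlock(_Atomic(struct mcs_node *) *tail, struct mcs_node *self)
{
	struct mcs_node *next =
		atomic_load_explicit(&self->next, memory_order_acquire);
	if (!next) {
		/* No visible successor: try to swing the tail back to empty. */
		struct mcs_node *expected = self;
		if (atomic_compare_exchange_strong_explicit(
				tail, &expected, NULL,
				memory_order_release, memory_order_relaxed))
			return;
		/* A new waiter won the exchange but hasn't linked in yet. */
		while (!(next = atomic_load_explicit(&self->next,
						     memory_order_acquire)))
			;
	}
	/* Hand off to the oldest waiter: strict FIFO, hence the fairness. */
	atomic_store_explicit(&next->wait, false, memory_order_release);
}

The property that matters on a 16-socket machine is that each waiter spins only on its own node's wait flag, so a lock handoff invalidates one cache line rather than broadcasting to every spinning CPU, which is where both the throughput and the fastest/slowest-thread numbers above come from.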
Diffstat (limited to 'arch/powerpc/Kconfig')
-rw-r--r--  arch/powerpc/Kconfig | 15 +++++++++++++++
1 file changed, 15 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 81c0dee1cbff..a751edacf4bc 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -146,6 +146,8 @@ config PPC
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF if PPC64
+ select ARCH_USE_QUEUED_RWLOCKS if PPC_QUEUED_SPINLOCKS
+ select ARCH_USE_QUEUED_SPINLOCKS if PPC_QUEUED_SPINLOCKS
select ARCH_WANT_IPC_PARSE_VERSION
select ARCH_WEAK_RELEASE_ACQUIRE
select BINFMT_ELF
@@ -491,6 +493,19 @@ config HOTPLUG_CPU
Say N if you are unsure.
+config PPC_QUEUED_SPINLOCKS
+ bool "Queued spinlocks"
+ depends on SMP
+ help
+ Say Y here to use queued spinlocks which give better scalability and
+ fairness on large SMP and NUMA systems without harming single threaded
+ performance.
+
+ This option is currently experimental, the code is more complex and
+ less tested so it defaults to "N" for the moment.
+
+ If unsure, say "N".
+
config ARCH_CPU_PROBE_RELEASE
def_bool y
depends on HOTPLUG_CPU
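For context on what the new option feeds into: ARCH_USE_QUEUED_SPINLOCKS and ARCH_USE_QUEUED_RWLOCKS opt the architecture into the generic queued lock implementations in kernel/locking/. A hypothetical sketch of the kind of build-time switch an arch spinlock header then performs (illustrative only; the real powerpc header wiring is done in other patches of this series):

/* Hypothetical arch spinlock header -- illustrative sketch, not the
 * literal code from this series. */
#ifdef CONFIG_PPC_QUEUED_SPINLOCKS
#include <asm/qspinlock.h>        /* queued spinlocks (generic slow path) */
#include <asm/qrwlock.h>          /* queued rwlocks */
#else
#include <asm/simple_spinlock.h>  /* the existing test-and-set locks */
#endif

Because PPC_QUEUED_SPINLOCKS defaults to "N", builds behave exactly as before unless the option is explicitly enabled (CONFIG_PPC_QUEUED_SPINLOCKS=y).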