author    Mateusz Guzik <mjguzik@gmail.com>           2024-05-28 22:42:57 +0200
committer Andrew Morton <akpm@linux-foundation.org>   2024-06-24 22:25:03 -0700
commit    51d821654be4286b005ad2b7dc8b973d5008a2ec (patch)
tree      fcb7bf5cea0db77b0ac335acd4455ee41ee74da3 /drivers
parent    727759d748ed34cc8d3e1d215fbc1766010dee3d (diff)
percpu_counter: add a cmpxchg-based _add_batch variant
Interrupt disable/enable trips are quite expensive on x86-64 compared to a
mere cmpxchg (note: no lock prefix!) and percpu counters are used quite
often.

With this change I get a bump of 1% ops/s for negative path lookups,
plugged into will-it-scale:

    void testcase(unsigned long long *iterations, unsigned long nr)
    {
            while (1) {
                    int fd = open("/tmp/nonexistent", O_RDONLY);
                    assert(fd == -1);
                    (*iterations)++;
            }
    }

The win would be higher if it was not for other slowdowns, but one has to
start somewhere.

Link: https://lkml.kernel.org/r/20240528204257.434817-1-mjguzik@gmail.com
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Dennis Zhou <dennis@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
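[Editor's note] The patch body itself is not shown on this page (the diffstat is
limited to 'drivers'). As an illustration only, the sketch below shows the kind
of cmpxchg-based fast path the title describes: the local_irq_save()/restore()
pair around the per-CPU update is replaced by a this_cpu_try_cmpxchg() loop on
the current CPU's counter slot, falling back to the locked slow path when the
batch threshold is crossed. It assumes the existing struct percpu_counter
fields (lock, count, counters); the authoritative version is the change to
lib/percpu_counter.c in the commit, which additionally keeps the irq-disable
variant as a fallback for architectures without a cheap local cmpxchg.

    #include <linux/percpu_counter.h>
    #include <linux/spinlock.h>

    void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
    {
            s64 count;
            unsigned long flags;

            /* Racy read is fine: the cmpxchg below revalidates the value. */
            count = this_cpu_read(*fbc->counters);
            do {
                    if (unlikely(abs(count + amount) >= batch)) {
                            /* Slow path: fold into the global count under the lock. */
                            raw_spin_lock_irqsave(&fbc->lock, flags);
                            /* We may have migrated CPUs; re-read the local counter. */
                            count = __this_cpu_read(*fbc->counters);
                            fbc->count += count + amount;
                            __this_cpu_sub(*fbc->counters, count);
                            raw_spin_unlock_irqrestore(&fbc->lock, flags);
                            return;
                    }
                    /* Fast path: plain (un-prefixed) cmpxchg on this CPU's slot. */
            } while (!this_cpu_try_cmpxchg(*fbc->counters, &count, count + amount));
    }

Because the cmpxchg only ever targets the current CPU's slot, no lock prefix is
required; an interrupt that updates the counter on the same CPU simply makes
the cmpxchg fail, and the loop retries with the freshly re-read value.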
Diffstat (limited to 'drivers')
0 files changed, 0 insertions, 0 deletions