path: root/net/xdp/xsk_queue.h
author:    Brendan Jackman <jackmanb@google.com>	2021-04-29 13:05:10 +0000
committer: Andrii Nakryiko <andrii@kernel.org>	2021-05-03 09:54:12 -0700
commit:    2a30f9440640c418bcfbea9b2b344d268b58e0a2 (patch)
tree:      c49a61cb379ced5ebd365645b999382e1399779a /net/xdp/xsk_queue.h
parent:    801c6058d14a82179a7ee17a4b532cac6fad067f (diff)
libbpf: Fix signed overflow in ringbuf_process_ring
One of our benchmarks running in (Google-internal) CI pushes data through the
ringbuf faster than userspace is able to consume it. In this case it seems
we're actually able to get >INT_MAX entries in a single ring_buffer__consume()
call. ASAN detected that cnt overflows in this case.

Fix by using a 64-bit counter internally and then capping the result to
INT_MAX before converting to the int return type. Do the same for
ring_buffer__poll().

Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210429130510.1621665-1-jackmanb@google.com
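A minimal sketch of the capping pattern the commit message describes, not the
patch itself: the struct definitions and the ringbuf_process_ring() stub below
are simplified stand-ins for libbpf's internal types and per-ring consumer,
assumed here only so the example compiles. The point is that the per-ring
count and the aggregate are kept in int64_t and clamped to INT_MAX before
being narrowed to the int return type of ring_buffer__consume().

	#include <limits.h>
	#include <stdint.h>

	/* Simplified stand-ins for libbpf's internal types (illustration only). */
	struct ring { int map_fd; };
	struct ring_buffer { struct ring *rings; int ring_cnt; };

	/* Hypothetical per-ring consumer: drains one ring and returns the number
	 * of records handled, or a negative error. Returning int64_t instead of
	 * int means the per-ring count itself can no longer overflow. */
	static int64_t ringbuf_process_ring(struct ring *r)
	{
		int64_t cnt = 0;
		(void)r;
		/* ... consume records, incrementing cnt for each one ... */
		return cnt;
	}

	/* Accumulate in 64 bits, clamp to INT_MAX on return so the conversion
	 * to the int return type cannot overflow (the UB ASAN flagged). */
	int ring_buffer__consume(struct ring_buffer *rb)
	{
		int64_t err, res = 0;
		int i;

		for (i = 0; i < rb->ring_cnt; i++) {
			struct ring *ring = &rb->rings[i];

			err = ringbuf_process_ring(ring);
			if (err < 0)
				return (int)err;
			res += err;
		}
		return res > INT_MAX ? INT_MAX : (int)res;
	}

Per the commit message, ring_buffer__poll() gets the same treatment: its
aggregated count is also clamped to INT_MAX before being returned as int.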
Diffstat (limited to 'net/xdp/xsk_queue.h')
0 files changed, 0 insertions, 0 deletions