| author | Alexei Starovoitov <ast@fb.com> | 2017-03-07 20:00:13 -0800 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2017-03-09 13:27:17 -0800 |
| commit | 4fe8435909fddc97b81472026aa954e06dd192a5 (patch) | |
| tree | 31f7007c13a16573229090569e6fc3ca5c496a50 /kernel/audit.h | |
| parent | 9f691549f76d488a0c74397b3e51e943865ea01f (diff) | |
bpf: convert htab map to hlist_nulls
When all map elements are pre-allocated, one cpu can delete and reuse a
htab_elem while another cpu is still walking the hlist. In such a case the
lookup may miss the element. Convert the hlist to hlist_nulls to avoid this
scenario. When the bucket lock is taken there is no need for such
precautions, so only convert map_lookup and map_get_next to nulls.

The race window is extremely small and is only reproducible with an explicit
udelay() inside lookup_nulls_elem_raw().
Similar to the plain hlist API, add hlist_nulls_for_each_entry_safe() and
hlist_nulls_entry_safe() helpers.
Fixes: 6c9059817432 ("bpf: pre-allocate hash map elements")
Reported-by: Jonathan Perry <jonperry@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'kernel/audit.h')
0 files changed, 0 insertions, 0 deletions