author	Daniel Borkmann <daniel@iogearbox.net>	2016-11-04 00:01:19 +0100
committer	David S. Miller <davem@davemloft.net>	2016-11-07 13:20:52 -0500
commit	483bed2b0ddd12ec33fc9407e0c6e1088e77a97c (patch)
tree	aa01c5eb2cc793ea5e3629ccf59c5977c28c0264 /kernel/bpf
parent	7233bc84a3aeda835d334499dc00448373caf5c0 (diff)
bpf: fix htab map destruction when extra reserve is in use
Commit a6ed3ea65d98 ("bpf: restore behavior of bpf_map_update_elem") added an extra per-cpu reserve to the hash table map to restore the old behaviour from pre-prealloc times. When non-prealloc is in use for a map, the problem is that once an element from the extra reserve has been linked into the hash table and the table is later destroyed because its refcount drops to zero, htab_map_free() -> delete_all_elements() walks the whole table and drops every element via htab_elem_free(). The element from the extra reserve is thereby fed to the wrong backend allocator and eventually freed twice.

Fixes: a6ed3ea65d98 ("bpf: restore behavior of bpf_map_update_elem")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
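To make the two freeing paths concrete, below is a minimal userspace sketch (not kernel code) of the idea behind the fix shown in the diff further down: elements taken from the per-cpu extra reserve are marked (HTAB_EXTRA_ELEM_USED in the real code) and skipped during the destruction walk, because the reserve itself is released separately; handing such an element to the regular element-free path is what produced the double free. All demo_* and DEMO_* names here are hypothetical stand-ins, not the actual structures in kernel/bpf/hashtab.c.

/*
 * Minimal userspace sketch of the map-destruction walk.  The demo_* names
 * are illustrative only; they model, not reproduce, the kernel's htab code.
 */
#include <stdio.h>
#include <stdlib.h>

/* Models the state flag the fix checks on each element. */
enum demo_elem_state {
	DEMO_ELEM_NORMAL,       /* allocated from the regular backend       */
	DEMO_EXTRA_ELEM_USED,   /* taken from the per-cpu extra reserve     */
};

struct demo_elem {
	enum demo_elem_state state;
	struct demo_elem *next; /* stand-in for the hash bucket list        */
	int key;
};

/* Regular backend free (kfree()-like in the non-prealloc case). */
static void demo_elem_free(struct demo_elem *l)
{
	printf("freeing normal element key=%d\n", l->key);
	free(l);
}

/*
 * Destruction walk, modelled on delete_all_elements(): extra-reserve
 * elements are skipped because the reserve is torn down separately;
 * freeing them here as well would be the double free the patch closes.
 */
static void demo_delete_all(struct demo_elem *head)
{
	struct demo_elem *l = head, *n;

	while (l) {
		n = l->next;
		if (l->state != DEMO_EXTRA_ELEM_USED)
			demo_elem_free(l);
		else
			printf("skipping extra-reserve element key=%d\n", l->key);
		l = n;
	}
}

int main(void)
{
	struct demo_elem *extra = malloc(sizeof(*extra));  /* models a reserve slot */
	struct demo_elem *normal = malloc(sizeof(*normal));

	extra->state = DEMO_EXTRA_ELEM_USED;
	extra->key = 1;
	extra->next = NULL;
	normal->state = DEMO_ELEM_NORMAL;
	normal->key = 2;
	normal->next = extra;

	demo_delete_all(normal);

	/* The reserve memory is released on its own, as the real map does. */
	free(extra);
	return 0;
}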
Diffstat (limited to 'kernel/bpf')
-rw-r--r--	kernel/bpf/hashtab.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 570eeca7bdfa..ad1bc67aff1b 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -687,7 +687,8 @@ static void delete_all_elements(struct bpf_htab *htab)
 
 		hlist_for_each_entry_safe(l, n, head, hash_node) {
 			hlist_del_rcu(&l->hash_node);
-			htab_elem_free(htab, l);
+			if (l->state != HTAB_EXTRA_ELEM_USED)
+				htab_elem_free(htab, l);
 		}
 	}
 }