path: root/kernel/bpf/hashtab.c
Commit message  Author  Date  Files  Lines
* bpf: Zeroing allocated object from slab in bpf memory allocator  Hou Tao  2023-02-15  1  -2/+2
* bpf: hash map, avoid deadlock with suitable hash mask  Tonghao Zhang  2023-01-12  1  -2/+2
* bpf: Do btf_record_free outside map_free callback  Kumar Kartikeya Dwivedi  2022-11-17  1  -1/+0
* bpf: Consolidate spin_lock, timer management into btf_record  Kumar Kartikeya Dwivedi  2022-11-03  1  -16/+8
* bpf: Refactor kptr_off_tab into btf_record  Kumar Kartikeya Dwivedi  2022-11-03  1  -8/+6
* treewide: use get_random_u32() when possible  Jason A. Donenfeld  2022-10-11  1  -1/+1
* bpf: Always use raw spinlock for hash bucket lock  Hou Tao  2022-09-21  1  -52/+14
* bpf: add missing percpu_counter_destroy() in htab_map_alloc()  Tetsuo Handa  2022-09-10  1  -0/+2
* bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.  Alexei Starovoitov  2022-09-05  1  -3/+3
* bpf: Convert percpu hash map to per-cpu bpf_mem_alloc.  Alexei Starovoitov  2022-09-05  1  -26/+19
* bpf: Add percpu allocation support to bpf_mem_alloc.  Alexei Starovoitov  2022-09-05  1  -1/+1
* bpf: Optimize call_rcu in non-preallocated hash map.  Alexei Starovoitov  2022-09-05  1  -2/+6
* bpf: Optimize element count in non-preallocated hash map.  Alexei Starovoitov  2022-09-05  1  -8/+62
* bpf: Convert hash map to bpf_mem_alloc.  Alexei Starovoitov  2022-09-05  1  -5/+16
* bpf: Propagate error from htab_lock_bucket() to userspace  Hou Tao  2022-08-31  1  -2/+5
* bpf: Disable preemption when increasing per-cpu map_locked  Hou Tao  2022-08-31  1  -5/+18
* Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  Jakub Kicinski  2022-08-17  1  -3/+3
|\
| * bpf: Use bpf_map_area_alloc consistently on bpf map creation  Yafang Shao  2022-08-10  1  -3/+3
| * bpf: Make __GFP_NOWARN consistent in bpf map creation  Yafang Shao  2022-08-10  1  -1/+1
* | bpf: Acquire map uref in .init_seq_private for hash map iterator  Hou Tao  2022-08-10  1  -0/+2
* | bpf: Don't reinit map value in prealloc_lru_pop  Kumar Kartikeya Dwivedi  2022-08-09  1  -5/+1
|/
* bpf: Make non-preallocated allocation low priority  Yafang Shao  2022-07-12  1  -3/+3
* bpf: add bpf_map_lookup_percpu_elem for percpu map  Feng Zhou  2022-05-11  1  -0/+32
* bpf: Extend batch operations for map-in-map bpf-maps  Takshak Chahande  2022-05-10  1  -2/+11
* bpf: Compute map_btf_id during build time  Menglong Dong  2022-04-26  1  -15/+7
* bpf: Wire up freeing of referenced kptr  Kumar Kartikeya Dwivedi  2022-04-25  1  -16/+48
* bpf: Remove unnecessary type castings  Yu Zhe  2022-04-14  1  -1/+1
* bpf: Cleanup comments  Tom Rix  2022-02-23  1  -1/+1
* bpf: Replace callers of BPF_CAST_CALL with proper function typedef  Kees Cook  2021-09-28  1  -4/+3
* bpf: Replace "want address" users of BPF_CAST_CALL with BPF_CALL_IMM  Kees Cook  2021-09-28  1  -3/+3
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  Jakub Kicinski  2021-08-13  1  -2/+2
|\
| * bpf: Fix integer overflow involving bucket_size  Tatsuhiko Yasumatsu  2021-08-07  1  -2/+2
* | bpf: Add map side support for bpf timers.  Alexei Starovoitov  2021-07-15  1  -12/+93
|/
* bpf: Allow RCU-protected lookups to happen from bh context  Toke Høiland-Jørgensen  2021-06-24  1  -7/+14
* bpf: Fix spelling mistakes  Zhen Lei  2021-05-24  1  -2/+2
* bpf: Add lookup_and_delete_elem support to hashtab  Denis Salopek  2021-05-24  1  -0/+98
* kernel/bpf/: Fix misspellings using codespell tool  Liu xuzhi  2021-03-16  1  -1/+1
* bpf: Add hashtab support for bpf_for_each_map_elem() helper  Yonghong Song  2021-02-26  1  -0/+65
* bpf: Allows per-cpu maps and map-in-map in sleepable programs  Alexei Starovoitov  2021-02-11  1  -2/+2
* bpf: Add schedule point in htab_init_buckets()  Eric Dumazet  2020-12-22  1  -0/+1
* bpf: Propagate __user annotations properly  Lukas Bulwahn  2020-12-07  1  -1/+1
* bpf: Avoid overflows involving hash elem_size  Eric Dumazet  2020-12-07  1  -2/+2
* bpf: Eliminate rlimit-based memory accounting for hashtab maps  Roman Gushchin  2020-12-02  1  -18/+1
* bpf: Refine memcg-based memory accounting for hashtab maps  Roman Gushchin  2020-12-02  1  -10/+14
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  Jakub Kicinski  2020-11-14  1  -45/+99
|\
| * bpf: Lift hashtab key_size limit  Florian Lehner  2020-11-05  1  -11/+5
| * bpf: Fix error path in htab_map_alloc()  Eric Dumazet  2020-11-02  1  -2/+4
| * bpf: Avoid hashtab deadlock with map_locked  Song Liu  2020-10-30  1  -32/+82
| * bpf: Use separate lockdep class for each hashtab  Song Liu  2020-10-30  1  -2/+10
* | bpf: Zero-fill re-used per-cpu map element  David Verbeiren  2020-11-05  1  -2/+28
|/