path: root/kernel/bpf/memalloc.c

Commit message  (Author, Date, Files changed, Lines removed/added)
* bpf: Remove unnecessary cpu == 0 check in memalloc  (Yonghong Song, 2024-01-04, 1 file, -1/+1)
* bpf: Use smaller low/high marks for percpu allocation  (Yonghong Song, 2024-01-03, 1 file, -1/+7)
* bpf: Refill only one percpu element in memalloc  (Yonghong Song, 2024-01-03, 1 file, -4/+9)
* bpf: Allow per unit prefill for non-fix-size percpu memory allocator  (Yonghong Song, 2024-01-03, 1 file, -1/+56)
* bpf: Add objcg to bpf_mem_alloc  (Yonghong Song, 2024-01-03, 1 file, -5/+6)
* bpf: Avoid unnecessary extra percpu memory allocation  (Yonghong Song, 2024-01-03, 1 file, -1/+3)
* bpf: Use c->unit_size to select target cache during free  (Hou Tao, 2023-12-20, 1 file, -94/+11)
* bpf: Add missed allocation hint for bpf_mem_cache_alloc_flags()  (Hou Tao, 2023-11-26, 1 file, -0/+2)
* bpf: Add more WARN_ON_ONCE checks for mismatched alloc and free  (Hou Tao, 2023-10-26, 1 file, -0/+4)
* bpf: Use pcpu_alloc_size() in bpf_mem_free{_rcu}()  (Hou Tao, 2023-10-20, 1 file, -2/+14)
* bpf: Re-enable unit_size checking for global per-cpu allocator  (Hou Tao, 2023-10-20, 1 file, -10/+12)
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 2023-10-05, 1 file, -25/+19)
|\
| * bpf: Use kmalloc_size_roundup() to adjust size_index  (Hou Tao, 2023-09-30, 1 file, -25/+19)
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Paolo Abeni, 2023-09-21, 1 file, -4/+90)
|\|
| * bpf: Skip unit_size checking for global per-cpu allocator  (Hou Tao, 2023-09-15, 1 file, -0/+7)
| * bpf: Ensure unit_size is matched with slab cache object size  (Hou Tao, 2023-09-11, 1 file, -2/+31)
| * bpf: Don't prefill for unused bpf_mem_cache  (Hou Tao, 2023-09-11, 1 file, -2/+14)
| * bpf: Adjust size_index according to the value of KMALLOC_MIN_SIZE  (Hou Tao, 2023-09-11, 1 file, -0/+38)
* | bpf: Enable IRQ after irq_work_raise() completes in unit_free{_rcu}()  (Hou Tao, 2023-09-08, 1 file, -2/+7)
* | bpf: Enable IRQ after irq_work_raise() completes in unit_alloc()  (Hou Tao, 2023-09-08, 1 file, -1/+6)
* | bpf: Add support for non-fix-size percpu mem allocation  (Yonghong Song, 2023-09-08, 1 file, -8/+6)
|/
* bpf: Non-atomically allocate freelist during prefill  (YiFei Zhu, 2023-07-28, 1 file, -4/+8)
* bpf: work around -Wuninitialized warning  (Arnd Bergmann, 2023-07-25, 1 file, -6/+6)
* bpf: Add object leak check.  (Hou Tao, 2023-07-12, 1 file, -0/+35)
* bpf: Introduce bpf_mem_free_rcu() similar to kfree_rcu().  (Alexei Starovoitov, 2023-07-12, 1 file, -3/+126)
* bpf: Allow reuse from waiting_for_gp_ttrace list.  (Alexei Starovoitov, 2023-07-12, 1 file, -6/+10)
* bpf: Add a hint to allocated objects.  (Alexei Starovoitov, 2023-07-12, 1 file, -19/+31)
* bpf: Change bpf_mem_cache draining process.  (Alexei Starovoitov, 2023-07-12, 1 file, -9/+9)
* bpf: Further refactor alloc_bulk().  (Alexei Starovoitov, 2023-07-12, 1 file, -12/+18)
* bpf: Factor out inc/dec of active flag into helpers.  (Alexei Starovoitov, 2023-07-12, 1 file, -12/+18)
* bpf: Refactor alloc_bulk().  (Alexei Starovoitov, 2023-07-12, 1 file, -20/+26)
* bpf: Let free_all() return the number of freed elements.  (Alexei Starovoitov, 2023-07-12, 1 file, -2/+6)
* bpf: Simplify code of destroy_mem_alloc() with kmemdup().  (Alexei Starovoitov, 2023-07-12, 1 file, -5/+2)
* bpf: Rename few bpf_mem_alloc fields.  (Alexei Starovoitov, 2023-07-12, 1 file, -28/+29)
* bpf: Factor out a common helper free_all()  (Hou Tao, 2023-06-06, 1 file, -15/+16)
* bpf: Add a few bpf mem allocator functions  (Martin KaFai Lau, 2023-03-25, 1 file, -9/+50)
* bpf: Zeroing allocated object from slab in bpf memory allocator  (Hou Tao, 2023-02-15, 1 file, -1/+1)
* bpf: allow to disable bpf map memory accounting  (Yafang Shao, 2023-02-10, 1 file, -1/+2)
* bpf: Fix off-by-one error in bpf_mem_cache_idx()  (Hou Tao, 2023-01-18, 1 file, -1/+1)
* bpf: Skip rcu_barrier() if rcu_trace_implies_rcu_gp() is true  (Hou Tao, 2022-12-08, 1 file, -1/+9)
* bpf: Reuse freed element in free_by_rcu during allocation  (Hou Tao, 2022-12-08, 1 file, -3/+18)
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 2022-10-24, 1 file, -2/+16)
|\
| * bpf: Use __llist_del_all() whenever possbile during memory draining  (Hou Tao, 2022-10-21, 1 file, -2/+5)
| * bpf: Wait for busy refill_work when destroying bpf memory allocator  (Hou Tao, 2022-10-21, 1 file, -0/+11)
* | bpf: Use rcu_trace_implies_rcu_gp() in bpf memory allocator  (Hou Tao, 2022-10-18, 1 file, -5/+10)
|/
* bpf: Check whether or not node is NULL before free it in free_bulk  (Hou Tao, 2022-09-20, 1 file, -1/+2)
* bpf: Replace __ksize with ksize.  (Alexei Starovoitov, 2022-09-06, 1 file, -1/+1)
* bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.  (Alexei Starovoitov, 2022-09-05, 1 file, -16/+64)
* bpf: Remove usage of kmem_cache from bpf_mem_cache.  (Alexei Starovoitov, 2022-09-05, 1 file, -36/+14)
* bpf: Prepare bpf_mem_alloc to be used by sleepable bpf programs.  (Alexei Starovoitov, 2022-09-05, 1 file, -1/+14)