author     Kui-Feng Lee <thinker.li@gmail.com>        2024-02-24 14:34:17 -0800
committer  Martin KaFai Lau <martin.lau@kernel.org>   2024-03-04 14:09:20 -0800
commit     187e2af05abe6bf80581490239c449456627d17a (patch)
tree       573c922b888bd957230d958b50a0756ce457262b /include/linux/bpf.h
parent     73e4f9e615d7b99f39663d4722dc73e8fa5db5f9 (diff)
bpf: struct_ops supports more than one page for trampolines.
The BPF struct_ops previously only allowed one page of trampolines. Each
function pointer of a struct_ops is implemented by a struct_ops bpf program,
and each struct_ops bpf program requires a trampoline. The following selftest
patch shows that each page can hold a little more than 20 trampolines.

While one page is more than enough for the tcp-cc use case, the sched_ext use
case shows that one page is not always enough and hits the one-page limit.
This patch overcomes that limit by allocating additional pages when needed,
up to a total of MAX_IMAGE_PAGES (8) pages, which is more than enough for
reasonable usages.

The variable st_map->image has been changed to st_map->image_pages, and its
type has been changed to an array of pointers to pages.

Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Link: https://lore.kernel.org/r/20240224223418.526631-3-thinker.li@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
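To make the shape of the change concrete, the following is a minimal userspace
C sketch, not kernel code: every name except MAX_IMAGE_PAGES is invented for
illustration. It shows a single image pointer replaced by a bounded array of
page pointers, with a new page allocated only when the current page cannot
hold the next trampoline:

#include <stdlib.h>

#define PAGE_SIZE	4096
#define MAX_IMAGE_PAGES	8	/* page limit cited in the commit message */

/* Hypothetical stand-in for the struct_ops map state: the single
 * `void *image` becomes a bounded array of page pointers. */
struct image_pages_sketch {
	void *pages[MAX_IMAGE_PAGES];
	int pages_cnt;
	unsigned int off;	/* write offset within the last page */
};

/* Reserve `size` bytes for the next trampoline, allocating a fresh page
 * only when the current one cannot hold it and the page cap allows. */
static void *image_reserve(struct image_pages_sketch *m, unsigned int size)
{
	void *page;

	if (!size || size > PAGE_SIZE)
		return NULL;

	if (m->pages_cnt && m->off + size <= PAGE_SIZE) {
		page = m->pages[m->pages_cnt - 1];
	} else {
		if (m->pages_cnt >= MAX_IMAGE_PAGES)
			return NULL;	/* would exceed MAX_IMAGE_PAGES */
		page = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
		if (!page)
			return NULL;
		m->pages[m->pages_cnt++] = page;
		m->off = 0;
	}

	m->off += size;
	return (char *)page + m->off - size;
}

static void image_free_all(struct image_pages_sketch *m)
{
	while (m->pages_cnt > 0)
		free(m->pages[--m->pages_cnt]);
	m->off = 0;
}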
Diffstat (limited to 'include/linux/bpf.h')
-rw-r--r--	include/linux/bpf.h	4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 814dc913a968..785660810e6a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1763,7 +1763,9 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
 				      struct bpf_tramp_link *link,
 				      const struct btf_func_model *model,
 				      void *stub_func,
-				      void *image, void *image_end);
+				      void **image, u32 *image_off,
+				      bool allow_alloc);
+void bpf_struct_ops_image_free(void *image);
 static inline bool bpf_try_module_get(const void *data, struct module *owner)
 {
 	if (owner == BPF_MODULE_OWNER)
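For orientation, a hypothetical caller of the reworked prototype could look
like the sketch below. Only the two declarations shown in the diff above are
real; the surrounding struct, field names, loop, and error handling are
assumptions made for illustration, not the actual code in
kernel/bpf/bpf_struct_ops.c.

/*
 * Illustrative sketch only -- not the real struct_ops map update.
 * The current image page and write offset now travel by reference, and
 * allow_alloc lets the helper start a new page when the current one is
 * full (subject to the overall page limit).
 */
static int prepare_all_trampolines_sketch(struct sketch_map *st_map,
					  struct bpf_tramp_links *tlinks)
{
	void *image = NULL;	/* current trampoline page */
	u32 image_off = 0;	/* next free byte within that page */
	int i, err;

	for (i = 0; i < st_map->nr_links; i++) {
		err = bpf_struct_ops_prepare_trampoline(tlinks,
							st_map->links[i],
							&st_map->func_models[i],
							st_map->stub_funcs[i],
							&image, &image_off,
							true /* allow_alloc */);
		if (err < 0)
			return err;	/* caller would then release any
					 * recorded pages with
					 * bpf_struct_ops_image_free() */
	}

	return 0;
}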