path: root/tools/lib/bpf/bpf.c
Commit message | Author | Age | Files | Lines
* libbpf: Fix lookup_and_delete_elem_flags error reporting (Mehrdad Arshad Rad, 2021-11-18; 1 file, -1/+3)

    [ Upstream commit 64165ddf8ea184631c65e3bbc8d59f6d940590ca ]

    Fix bpf_map_lookup_and_delete_elem_flags() to pass the return code through libbpf_err_errno(), as is already done in bpf_map_lookup_and_delete_elem().

    Fixes: f12b65432728 ("libbpf: Streamline error reporting for low-level APIs")
    Signed-off-by: Mehrdad Arshad Rad <arshad.rad@gmail.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20211104171354.11072-1-arshad.rad@gmail.com
    Signed-off-by: Sasha Levin <sashal@kernel.org>
* libbpf: Add bpf_cookie support to bpf_link_create() API (Andrii Nakryiko, 2021-08-17; 1 file, -7/+25)

    Add the ability to specify a bpf_cookie value when creating a BPF perf link with the bpf_link_create() low-level API. Given that the BPF_LINK_CREATE command keeps growing and gaining new fields specific to each type of BPF link, extend the libbpf side of the bpf_link_create() API and its corresponding OPTS struct to accommodate such changes. Add extra checks to prevent using incompatible/unexpected combinations of fields.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20210815070609.987780-11-andrii@kernel.org
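    A minimal usage sketch (not part of the commit; prog_fd is assumed to be a perf-event-attachable BPF program, perf_fd a perf event fd, and the cookie value is arbitrary):

        DECLARE_LIBBPF_OPTS(bpf_link_create_opts, opts,
                /* value the program can later read back via bpf_get_attach_cookie() */
                .perf_event.bpf_cookie = 0x1234,
        );

        int link_fd = bpf_link_create(prog_fd, perf_fd, BPF_PERF_EVENT, &opts);
        if (link_fd < 0)
                /* handle error */;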
* libbpf: Streamline error reporting for low-level APIs (Andrii Nakryiko, 2021-05-25; 1 file, -50/+118)

    Ensure that low-level APIs behave uniformly across libbpf as follows:
      - in case of an error, errno is always set to the correct error code;
      - when libbpf 1.0 mode is enabled with the LIBBPF_STRICT_DIRECT_ERRS option to libbpf_set_strict_mode(), return the -Exxx error value directly, instead of -1;
      - by default, until libbpf 1.0 is released, keep returning -1 directly.

    More context, justification, and discussion can be found in the "Libbpf: the road to v1.0" document ([0]).

    [0] https://docs.google.com/document/d/1UyjTZuPFWiPFyKk1tV5an11_iaRuec6U-ZESZ54nNTY

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
    Link: https://lore.kernel.org/bpf/20210525035935.1461796-4-andrii@kernel.org
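    A short sketch of what the two conventions mean for a caller (illustrative, not from the commit; map_fd, key and value are assumed to exist):

        /* Opt in to libbpf-1.0-style direct error codes. */
        libbpf_set_strict_mode(LIBBPF_STRICT_DIRECT_ERRS);

        int err = bpf_map_update_elem(map_fd, &key, &value, BPF_ANY);
        if (err < 0) {
                /* default (pre-1.0) mode: err == -1, errno holds the error code   */
                /* with LIBBPF_STRICT_DIRECT_ERRS: err == -Exxx, errno is also set */
                fprintf(stderr, "update failed: %d (errno %d)\n", err, errno);
        }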
* bpf: Extend libbpf with bpf_map_lookup_and_delete_elem_flags (Denis Salopek, 2021-05-24; 1 file, -0/+13)

    Add the bpf_map_lookup_and_delete_elem_flags() libbpf API in order to use the BPF_F_LOCK flag with the map_lookup_and_delete_elem() function.

    Signed-off-by: Denis Salopek <denis.salopek@sartura.hr>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/15b05dafe46c7e0750d110f233977372029d1f62.1620763117.git.denis.salopek@sartura.hr
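    A minimal usage sketch (not from the commit; assumes map_fd refers to a hash map whose value type embeds a struct bpf_spin_lock, which BPF_F_LOCK requires, and struct my_value is that hypothetical value type):

        __u32 key = 0;
        struct my_value val;   /* hypothetical: { struct bpf_spin_lock lock; ... } */
        int err;

        /* Atomically read out and delete the element while holding its lock. */
        err = bpf_map_lookup_and_delete_elem_flags(map_fd, &key, &val, BPF_F_LOCK);
        if (err)
                /* -1/errno or -Exxx, depending on libbpf strict mode */;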
* libbpf: Support attachment of BPF tracing programs to kernel modules (Andrii Nakryiko, 2020-12-03; 1 file, -1/+4)

    Teach libbpf to search for BTF types in kernel modules for tracing BPF programs. This allows attachment of raw_tp/fentry/fexit/fmod_ret/etc BPF program types to tracepoints and functions in kernel modules.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20201203204634.1325171-13-andrii@kernel.org
* libbpf: Factor out low-level BPF program loading helper (Andrii Nakryiko, 2020-12-03; 1 file, -32/+68)

    Refactor the low-level API for BPF program loading to not rely on public API types. This allows painless extension without constant efforts to cleverly not break backwards compatibility.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20201203204634.1325171-12-andrii@kernel.org
* libbpf: Cap retries in sys_bpf_prog_load (Stanislav Fomichev, 2020-12-03; 1 file, -1/+2)

    I've seen a situation where a process that's under pprof constantly generates SIGPROF, which prevents program loading indefinitely. The right thing to do is probably to disable signals in the upper layers while loading, but it still would be nice to get some error from libbpf instead of an endless loop.

    Let's add a small retry limit to program loading: try loading the program 5 (arbitrary) times and give up.

    v2:
    * 10 -> 5 retries (Andrii Nakryiko)

    Signed-off-by: Stanislav Fomichev <sdf@google.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20201202231332.3923644-1-sdf@google.com
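    Roughly, the bounded retry looks like this (a sketch of the idea, not the exact patch):

        static int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size)
        {
                int retries = 5;   /* give up after a handful of signal-interrupted attempts */
                int fd;

                do {
                        fd = sys_bpf(BPF_PROG_LOAD, attr, size);
                } while (fd < 0 && errno == EAGAIN && retries-- > 0);

                return fd;
        }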
* libbpf: Add support for freplace attachment in bpf_link_create (Toke Høiland-Jørgensen, 2020-09-29; 1 file, -3/+15)

    This adds support for supplying a target btf ID for the bpf_link_create() operation, and adds a new bpf_program__attach_freplace() high-level API for attaching freplace functions with a target.

    Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Link: https://lore.kernel.org/bpf/160138355387.48470.18026176785351166890.stgit@toke.dk
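    A minimal usage sketch (not from the commit; prog is assumed to be an freplace program, target_fd the fd of the already-loaded target program, and "xdp_dispatcher_func" a placeholder for a function present in the target's BTF):

        struct bpf_link *link;

        link = bpf_program__attach_freplace(prog, target_fd, "xdp_dispatcher_func");
        if (libbpf_get_error(link))
                /* handle error */;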
* libbpf: Remove assumption of single contiguous memory for BTF data (Andrii Nakryiko, 2020-09-28; 1 file, -1/+1)

    Refactor the internals of struct btf to remove the assumption that the BTF header, type data, and string data are laid out contiguously in memory in a single allocation. Now we have three separate pointers pointing to the start of each respective area: header, types, strings. In the next patches, these pointers will be re-assigned to point to independently allocated memory areas, if BTF needs to be modified.

    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Link: https://lore.kernel.org/bpf/20200926011357.2366158-3-andriin@fb.com
* libbpf: Support test run of raw tracepoint programs (Song Liu, 2020-09-28; 1 file, -0/+31)

    Add bpf_prog_test_run_opts() with support for the new fields in bpf_attr.test, namely flags and cpu. Also extend the _opts operations to support outputs via opts.

    Signed-off-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Link: https://lore.kernel.org/bpf/20200925205432.1777-3-songliubraving@fb.com
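    A minimal usage sketch (not from the commit; prog_fd and the ctx buffer are assumed, and the flag/field names follow this patch's opts struct):

        DECLARE_LIBBPF_OPTS(bpf_test_run_opts, topts,
                .ctx_in = &ctx,
                .ctx_size_in = sizeof(ctx),
                .flags = BPF_F_TEST_RUN_ON_CPU,   /* run on the CPU given below */
                .cpu = 2,
        );

        int err = bpf_prog_test_run_opts(prog_fd, &topts);
        if (!err)
                printf("retval: %u\n", topts.retval);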
* libbpf: Add BPF_PROG_BIND_MAP syscall and use it on .rodata section (YiFei Zhu, 2020-09-15; 1 file, -0/+16)

    The patch adds a simple wrapper, bpf_prog_bind_map, around the syscall. When libbpf tries to load a program, it will probe the kernel for support of this syscall and unconditionally bind the .rodata section to the program.

    Signed-off-by: YiFei Zhu <zhuyifei@google.com>
    Signed-off-by: Stanislav Fomichev <sdf@google.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Cc: YiFei Zhu <zhuyifei1999@gmail.com>
    Link: https://lore.kernel.org/bpf/20200915234543.3220146-4-sdf@google.com
* libbpf: Centralize poisoning and poison reallocarray() (Andrii Nakryiko, 2020-08-18; 1 file, -3/+0)

    Most libbpf source files already include libbpf_internal.h, so it's a good place to centralize identifier poisoning. So move the kernel integer type poisoning there. Also add reallocarray to the poison list to prevent accidental use of it; libbpf_reallocarray() should be used universally instead.

    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200819013607.3607269-4-andriin@fb.com
* tools/bpf: Support new uapi for map element bpf iterator (Yonghong Song, 2020-08-06; 1 file, -0/+3)

    The previous commit adjusted the kernel uapi for the map element bpf iterator. This patch adjusts the libbpf API due to the uapi change. bpftool and the bpf_iter selftests are also changed accordingly.

    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Link: https://lore.kernel.org/bpf/20200805055058.1457623-1-yhs@fb.com
* libbpf: Add bpf_link detach APIs (Andrii Nakryiko, 2020-08-01; 1 file, -0/+10)

    Add the low-level bpf_link_detach() API. Also add the higher-level bpf_link__detach() one.

    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Link: https://lore.kernel.org/bpf/20200731182830.286260-3-andriin@fb.com
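    A minimal usage sketch of both flavors (not from the commit; link_fd and link are assumed to exist):

        /* Low-level: detach a link by its fd; the fd itself stays open until closed. */
        err = bpf_link_detach(link_fd);

        /* High-level: detach a struct bpf_link obtained from a bpf_program__attach_*() call. */
        err = bpf_link__detach(link);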
* tools/libbpf: Add support for bpf map element iterator (Yonghong Song, 2020-07-25; 1 file, -0/+1)

    Add map_fd to bpf_iter_attach_opts and flags to bpf_link_create_opts. Later on, bpftool or a selftest will be able to create a bpf map element iterator by passing map_fd to the kernel during link creation time.

    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200723184117.590673-1-yhs@fb.com
* tools/libbpf: Add bpf_iter support (Yonghong Song, 2020-05-09; 1 file, -0/+10)

    Two new libbpf APIs are added to support bpf_iter:
      - bpf_program__attach_iter: given a bpf program and additional parameters (none for now), returns a bpf_link.
      - bpf_iter_create: syscall-level API to create a bpf iterator.

    The macro BPF_SEQ_PRINTF is also introduced. The format looks like:

        BPF_SEQ_PRINTF(seq, "task id %d\n", pid);

    This macro can help bpf program writers with nicer bpf_seq_printf syntax similar to the kernel one.

    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Link: https://lore.kernel.org/bpf/20200509175917.2476936-1-yhs@fb.com
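    A sketch of the user-space side (not from the commit; prog is assumed to be an iterator program such as SEC("iter/task"), and error handling is omitted):

        struct bpf_link *link = bpf_program__attach_iter(prog, NULL);
        int iter_fd = bpf_iter_create(bpf_link__fd(link));
        char buf[4096];
        ssize_t n;

        /* Each read() drives the iterator; the text comes from BPF_SEQ_PRINTF calls. */
        while ((n = read(iter_fd, buf, sizeof(buf))) > 0)
                fwrite(buf, 1, n, stdout);
        close(iter_fd);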
* libbpf: Add support for command BPF_ENABLE_STATS (Song Liu, 2020-05-01; 1 file, -0/+10)

    bpf_enable_stats() is added to enable the given stats.

    Signed-off-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200430071506.1408910-3-songliubraving@fb.com
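    A minimal usage sketch (not from the commit):

        /* Enable run-time stats; they stay enabled while the returned fd is open. */
        int stats_fd = bpf_enable_stats(BPF_STATS_RUN_TIME);
        if (stats_fd < 0)
                /* older kernel or insufficient privileges */;
        /* ... run workloads, read run_time_ns/run_cnt from program info ... */
        close(stats_fd);   /* stats turn off once the last such fd is closed */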
* libbpf: Add low-level APIs for new bpf_link commands (Andrii Nakryiko, 2020-04-28; 1 file, -2/+17)

    Add low-level API calls for bpf_link_get_next_id() and bpf_link_get_fd_by_id().

    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200429001614.1544-6-andriin@fb.com
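    A minimal sketch of iterating all BPF links on the system (not from the commit):

        __u32 id = 0;

        while (!bpf_link_get_next_id(id, &id)) {
                int fd = bpf_link_get_fd_by_id(id);

                if (fd < 0)
                        continue;   /* link vanished or insufficient privileges */
                /* ... inspect the link, e.g. via bpf_obj_get_info_by_fd() ... */
                close(fd);
        }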
* libbpf: Add support for bpf_link-based cgroup attachment (Andrii Nakryiko, 2020-03-30; 1 file, -0/+34)

    Add bpf_program__attach_cgroup(), which uses the BPF_LINK_CREATE subcommand to create an FD-based kernel bpf_link. Also add the low-level bpf_link_create() API.

    If expected_attach_type is not specified explicitly with bpf_program__set_expected_attach_type(), libbpf will try to determine the proper attach type from the BPF program's section definition.

    Also add support for replacing a bpf_link's underlying BPF program:
      - unconditionally, through the high-level bpf_link__update_program() API;
      - cmpxchg-like, by specifying the expected current BPF program through the low-level bpf_link_update() API.

    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200330030001.2312810-4-andriin@fb.com
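    A minimal sketch of the low-level flow (not from the commit; prog_fd, new_prog_fd and cgroup_fd are assumed, and BPF_CGROUP_INET_INGRESS stands in for whatever attach type the program actually uses):

        int link_fd = bpf_link_create(prog_fd, cgroup_fd, BPF_CGROUP_INET_INGRESS, NULL);

        /* Later: cmpxchg-like replacement of the underlying program. */
        DECLARE_LIBBPF_OPTS(bpf_link_update_opts, opts,
                .flags = BPF_F_REPLACE,
                .old_prog_fd = prog_fd,   /* fail unless this program is still attached */
        );
        err = bpf_link_update(link_fd, new_prog_fd, &opts);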
* tools/libbpf: Add support for BPF_PROG_TYPE_LSM (KP Singh, 2020-03-30; 1 file, -1/+2)

    Since BPF_PROG_TYPE_LSM uses the same attaching mechanism as BPF_PROG_TYPE_TRACING, the common logic is refactored into a static function, bpf_program__attach_btf_id. A new API call, bpf_program__attach_lsm, is still added to avoid userspace conflicts if this ever changes in the future.

    Signed-off-by: KP Singh <kpsingh@google.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Brendan Jackman <jackmanb@google.com>
    Reviewed-by: Florent Revest <revest@google.com>
    Reviewed-by: James Morris <jamorris@linux.microsoft.com>
    Acked-by: Yonghong Song <yhs@fb.com>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Link: https://lore.kernel.org/bpf/20200329004356.27286-7-kpsingh@chromium.org
* libbpf: Add support for program extensions (Alexei Starovoitov, 2020-01-22; 1 file, -1/+2)

    Add minimal support for program extensions. bpf_object_open_opts() needs to be called with attach_prog_fd = target_prog_fd, and the BPF program extension needs to have a section definition in its .c file like SEC("freplace/func_to_be_replaced"). libbpf will search for "func_to_be_replaced" in the target_prog_fd's BTF and will pass it in attach_btf_id to the kernel.

    This approach works for tests, but more complex use cases may need the function name (and the attach_btf_id that the kernel sees) to be more dynamic. Such an API will be added in future patches.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
    Link: https://lore.kernel.org/bpf/20200121005348.2769920-3-ast@kernel.org
* libbpf: Fix unneeded extra initialization in bpf_map_batch_common (Brian Vazquez, 2020-01-16; 1 file, -1/+1)

    bpf_attr doesn't need to be declared with '= {}' since memset is used in the code.

    Fixes: 2ab3d86ea1859 ("libbpf: Add libbpf support to batch ops")
    Reported-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Brian Vazquez <brianvv@google.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20200116045918.75597-1-brianvv@google.com
* libbpf: Add libbpf support to batch ops (Yonghong Song, 2020-01-15; 1 file, -0/+58)

    Added four libbpf API functions to support map batch operations:
      . int bpf_map_delete_batch( ... )
      . int bpf_map_lookup_batch( ... )
      . int bpf_map_lookup_and_delete_batch( ... )
      . int bpf_map_update_batch( ... )

    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200115184308.162644-8-brianvv@google.com
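    A rough sketch of batched lookup (not from the commit; assumes a hash map with 4-byte keys and values, and that a __u32 batch token is adequate for this map type):

        DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts);
        __u32 keys[128], vals[128], batch, count;
        void *in_batch = NULL;
        int err = 0;

        while (!err) {
                count = 128;
                err = bpf_map_lookup_batch(map_fd, in_batch, &batch,
                                           keys, vals, &count, &opts);
                /* 'count' elements are valid here; ENOENT signals the end of the map */
                in_batch = &batch;
        }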
* libbpf: Poison kernel-only integer types (Andrii Nakryiko, 2020-01-10; 1 file, -0/+3)

    It's been a recurring issue with types like u32 slipping into libbpf source code accidentally. This is not detected during builds inside the kernel source tree, but becomes a compilation error in libbpf's Github repo. Libbpf is supposed to use only __{s,u}{8,16,32,64} typedefs, so poison {s,u}{8,16,32,64} explicitly in every .c file. Doing that in a more centralized way, e.g., inside libbpf_internal.h, breaks selftests, which are using both kernel u32 and libbpf_internal.h.

    This patch also fixes a new u32 occurrence in libbpf.c, added recently.

    Fixes: 590a00888250 ("bpf: libbpf: Add STRUCT_OPS support")
    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20200110181916.271446-1-andriin@fb.com
* bpf: libbpf: Add STRUCT_OPS support (Martin KaFai Lau, 2020-01-09; 1 file, -2/+8)

    This patch adds BPF STRUCT_OPS support to libbpf.

    The only sec_name convention is SEC(".struct_ops") to identify the struct_ops implemented in BPF, e.g. to implement a tcp_congestion_ops:

        SEC(".struct_ops")
        struct tcp_congestion_ops dctcp = {
                .init = (void *)dctcp_init,   /* <-- a bpf_prog */
                /* ... some more func ptrs ... */
                .name = "bpf_dctcp",
        };

    Each struct_ops is defined as a global variable under SEC(".struct_ops") as above. libbpf creates a map for each variable, and the variable name is the map's name. Multiple struct_ops are supported under SEC(".struct_ops").

    In the bpf_object__open phase, libbpf will look for the SEC(".struct_ops") section and find out which btf-type the struct_ops is implementing. Note that the btf-type here refers to a type in the bpf_prog.o's btf. A "struct bpf_map" is added by bpf_object__add_map() as other maps do. It will then collect (through SHT_REL) where the bpf progs that the func ptrs refer to are. No btf_vmlinux is needed in the open phase.

    In the bpf_object__load phase, the map-fields, which depend on the btf_vmlinux, are initialized (in bpf_map__init_kern_struct_ops()). It will also set the prog->type, prog->attach_btf_id, and prog->expected_attach_type. Thus, the prog's properties do not rely on its section name.

    [ Currently, the bpf_prog's btf-type ==> btf_vmlinux's btf-type matching process is as simple as: member-name match + btf-kind match + size match. If these matching conditions fail, libbpf will reject. The current targeting support is "struct tcp_congestion_ops", most of whose members are function pointers. The member ordering of the bpf_prog's btf-type can be different from the btf_vmlinux's btf-type. ]

    Then, all obj->maps are created as usual (in bpf_object__create_maps()). Once the maps are created and the prog's properties are all set, libbpf will proceed to load all the progs.

    bpf_map__attach_struct_ops() is added to register a struct_ops map to a kernel subsystem.

    Signed-off-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200109003514.3856730-1-kafai@fb.com
* libbpf: Introduce bpf_prog_attach_xattr (Andrey Ignatov, 2019-12-19; 1 file, -1/+16)

    Introduce a new bpf_prog_attach_xattr function that, in addition to program fd, target fd and attach type, accepts an extendable struct bpf_prog_attach_opts.

    bpf_prog_attach_opts relies on the DECLARE_LIBBPF_OPTS macro to maintain backward and forward compatibility and has the following "optional" attach attributes:
      * the existing attach_flags, since it's not required when attaching in NONE mode. Even though it's quite often used in MULTI and OVERRIDE mode, it seems to be a good idea to reduce the number of arguments to bpf_prog_attach_xattr;
      * a newly introduced attribute of the BPF_PROG_ATTACH command: replace_prog_fd, the fd of a previously attached cgroup-bpf program to replace if the BPF_F_REPLACE flag is used.

    The new function is named to be consistent with other xattr-functions (bpf_prog_test_run_xattr, bpf_create_map_xattr, bpf_load_program_xattr).

    The struct bpf_prog_attach_opts is supposed to be used with the DECLARE_LIBBPF_OPTS macro.

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Link: https://lore.kernel.org/bpf/bd6e0732303eb14e4b79cb128268d9e9ad6db208.1576741281.git.rdna@fb.com
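    A minimal usage sketch (not from the commit; old_prog_fd, new_prog_fd and cgroup_fd are assumed, and BPF_CGROUP_INET_EGRESS stands in for the actual attach type):

        DECLARE_LIBBPF_OPTS(bpf_prog_attach_opts, opts,
                .flags = BPF_F_ALLOW_MULTI | BPF_F_REPLACE,
                .replace_prog_fd = old_prog_fd,   /* program to swap out in place */
        );

        err = bpf_prog_attach_xattr(new_prog_fd, cgroup_fd, BPF_CGROUP_INET_EGRESS, &opts);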
* libbpf: Add support for attaching BPF programs to other BPF programs (Alexei Starovoitov, 2019-11-15; 1 file, -3/+5)

    Extend the libbpf API to pass attach_prog_fd into bpf_object__open.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20191114185720.1641606-19-ast@kernel.org
* libbpf: Fix potential overflow issue (Andrii Nakryiko, 2019-11-07; 1 file, -1/+1)

    Fix a potential overflow issue found by LGTM analysis, based on the Github libbpf source code.

    Fixes: 3d65014146c6 ("bpf: libbpf: Add btf_line_info support to libbpf")
    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20191107020855.3834758-3-andriin@fb.com
* libbpf: Add support for prog_tracing (Alexei Starovoitov, 2019-10-31; 1 file, -4/+4)

    Clean up libbpf's expected_attach_type == attach_btf_id hack and introduce BPF_PROG_TYPE_TRACING.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20191030223212.953010-3-ast@kernel.org
* libbpf: Auto-detect btf_id of BTF-based raw_tracepoints (Alexei Starovoitov, 2019-10-17; 1 file, -0/+3)

    It's the responsibility of the bpf program author to annotate the program with SEC("tp_btf/name") where "name" is a valid raw tracepoint. libbpf will try to find "name" in vmlinux BTF and error out if vmlinux BTF is not available or "name" is not found. If "name" is indeed a valid raw tracepoint, then the in-kernel BTF will have a "btf_trace_##name" typedef that points to the function prototype of that raw tracepoint. The BTF description captures the exact arguments the kernel C code is passing into the raw tracepoint. The kernel verifier will check the types while loading the bpf program.

    libbpf keeps the BTF type id in expected_attach_type, but since the kernel ignores this attribute for tracing programs, copy it into the attach_btf_id attribute before loading. Later the kernel will use prog->attach_btf_id to select the raw tracepoint during the bpf_raw_tracepoint_open syscall command.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20191016032505.2089704-6-ast@kernel.org
* libbpf: add bpf_btf_get_next_id() to cycle through BTF objects (Quentin Monnet, 2019-08-20; 1 file, -0/+5)

    Add an API function taking a BTF object id and providing the id of the next BTF object in the kernel. This can be used to list all BTF objects loaded on the system.

    v2:
    - Rebase on top of Andrii's changes regarding libbpf versioning.

    Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: refactor bpf_*_get_next_id() functions (Quentin Monnet, 2019-08-20; 1 file, -13/+8)

    In preparation for the introduction of a similar function for retrieving the id of the next BTF object, consolidate the code from bpf_prog_get_next_id() and bpf_map_get_next_id() in libbpf.

    Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: add common min/max macro to libbpf_internal.h (Andrii Nakryiko, 2019-06-18; 1 file, -5/+2)

    Multiple files in libbpf redefine their own definitions for min/max. Let's define them in libbpf_internal.h and use those everywhere.

    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* libbpf: add "prog_flags" to bpf_program/bpf_prog_load_attr/bpf_load_program_attrJiong Wang2019-05-241-0/+1
| | | | | | | | | | | | | libbpf doesn't allow passing "prog_flags" during bpf program load in a couple of load related APIs, "bpf_load_program_xattr", "load_program" and "bpf_prog_load_xattr". It makes sense to allow passing "prog_flags" which is useful for customizing program loading. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* tools/bpf: fix perf build error with uClibc (seen on ARC) (Vineet Gupta, 2019-05-05; 1 file, -0/+2)

    When building perf for ARC recently, there was a build failure due to lack of __NR_bpf:

        | Auto-detecting system features:
        |
        | ...                    get_cpuid: [ OFF ]
        | ...                          bpf: [ on  ]
        |
        | #  error __NR_bpf not defined. libbpf does not support your arch.
        |    ^~~~~
        | bpf.c: In function 'sys_bpf':
        | bpf.c:66:17: error: '__NR_bpf' undeclared (first use in this function)
        |   return syscall(__NR_bpf, cmd, attr, size);
        |                  ^~~~~~~~
        |                  sys_bpf

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    Acked-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: add support for ctx_{size, }_{in, out} in BPF_PROG_TEST_RUN (Stanislav Fomichev, 2019-04-11; 1 file, -0/+5)

    Support the recently introduced input/output context for test runs. We extend only bpf_prog_test_run_xattr; bpf_prog_test_run is unextendable and left as is.

    Signed-off-by: Stanislav Fomichev <sdf@google.com>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* bpf, bpftool: fix a few ubsan warnings (Yonghong Song, 2019-04-10; 1 file, -10/+9)

    The issue is reported at https://github.com/libbpf/libbpf/issues/28.

    Basically, per the C standard, for void *memcpy(void *dest, const void *src, size_t n), if "dest" or "src" is NULL then, regardless of whether "n" is 0 or not, the result of memcpy is undefined. clang ubsan reported three such instances in bpf.c with the following pattern: memcpy(dest, 0, 0).

    Although in practice no known compiler will cause issues when the copy size is 0, let us still fix the issue to silence the ubsan warnings.

    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* bpf, libbpf: support global data/bss/rodata sections (Daniel Borkmann, 2019-04-09; 1 file, -0/+10)

    This work adds BPF loader support for global data sections to libbpf. This allows to write BPF programs in a more natural C-like way by being able to define global variables and const data.

    Back at LPC 2018 [0] we presented a first prototype which implemented support for global data sections by extending the BPF syscall, where union bpf_attr would get an additional memory/size pair for each section passed during prog load, in order to later add this base address into the ldimm64 instruction along with the user-provided offset when accessing a variable. Consensus from LPC was that for proper upstream support, it would be more desirable to use maps instead of a bpf_attr extension, as this would allow for introspection of these sections as well as potential live updates of their content. This work follows this path by taking the following steps from the loader side:

    1) In the bpf_object__elf_collect() step we pick up ".data", ".rodata", and ".bss" section information.

    2) If present, in bpf_object__init_internal_map() we add maps to the obj's map array that correspond to each of the present sections. Given that section size and access properties can differ, a single-entry array map is created with a value size corresponding to the ELF section size of .data, .bss or .rodata. These internal maps are integrated into the normal map handling of libbpf such that when the user traverses all obj maps, they can be differentiated from user-created ones via bpf_map__is_internal(). In later steps, when we actually create these maps in the kernel via bpf_object__create_maps(), then for the .data and .rodata sections their content is copied into the map through bpf_map_update_elem(). For .bss this is not necessary since the array map is already zero-initialized by default. Additionally, for .rodata the map is frozen as read-only after setup, such that writes would be possible neither from the program nor the syscall side.

    3) In the bpf_program__collect_reloc() step, we record the corresponding map, insn index, and relocation type for the global data.

    4) And last but not least, in the actual relocation step in bpf_program__relocate(), we mark the ldimm64 instruction with src_reg = BPF_PSEUDO_MAP_VALUE where the map's file descriptor is stored in the first imm field, similarly to BPF_PSEUDO_MAP_FD, and in the second imm field (as ldimm64 is 2-insn wide) we store the access offset into the section. Given these maps have only a single element, ldimm64's off remains zero in both parts.

    5) On the kernel side, this specially marked BPF_PSEUDO_MAP_VALUE load will then store the actual target address in order to have 'map-lookup'-free access. That is, the actual map value base address + offset. The destination register in the verifier will then be marked as PTR_TO_MAP_VALUE, containing the fixed offset as reg->off and the backing BPF map as reg->map_ptr. Meaning, it's treated as any other normal map value from the verification side, only with efficient, direct value access instead of an actual call to the map lookup helper as in the typical case.

    Currently, only support for static global variables has been added, and libbpf rejects non-static global variables from loading. This can be lifted once we have proper semantics for how BPF will treat multi-object BPF loads. From the BTF side, libbpf will set the value type id of the types corresponding to the ".bss", ".data" and ".rodata" names, which LLVM will emit without the object name prefix. The key type will be left as zero, thus making use of the key-less BTF option in array maps.

    Simple example dump of a program using global vars in each section:

        # bpftool prog
        [...]
        6784: sched_cls name load_static_dat tag a7e1291567277844 gpl
              loaded_at 2019-03-11T15:39:34+0000 uid 0
              xlated 1776B jited 993B memlock 4096B map_ids 2238,2237,2235,2236,2239,2240

        # bpftool map show id 2237
        2237: array name test_glo.bss flags 0x0
              key 4B value 64B max_entries 1 memlock 4096B

        # bpftool map show id 2235
        2235: array name test_glo.data flags 0x0
              key 4B value 64B max_entries 1 memlock 4096B

        # bpftool map show id 2236
        2236: array name test_glo.rodata flags 0x80
              key 4B value 96B max_entries 1 memlock 4096B

        # bpftool prog dump xlated id 6784
        int load_static_data(struct __sk_buff * skb):
        ; int load_static_data(struct __sk_buff *skb)
           0: (b7) r6 = 0
        ; test_reloc(number, 0, &num0);
           1: (63) *(u32 *)(r10 -4) = r6
           2: (bf) r2 = r10
        ; int load_static_data(struct __sk_buff *skb)
           3: (07) r2 += -4
        ; test_reloc(number, 0, &num0);
           4: (18) r1 = map[id:2238]
           6: (18) r3 = map[id:2237][0]+0    <-- direct addr in .bss area
           8: (b7) r4 = 0
           9: (85) call array_map_update_elem#100464
          10: (b7) r1 = 1
        ; test_reloc(number, 1, &num1);
        [...]
        ; test_reloc(string, 2, str2);
         120: (18) r8 = map[id:2237][0]+16   <-- same here at offset +16
         122: (18) r1 = map[id:2239]
         124: (18) r3 = map[id:2237][0]+16
         126: (b7) r4 = 0
         127: (85) call array_map_update_elem#100464
         128: (b7) r1 = 120
        ; str1[5] = 'x';
         129: (73) *(u8 *)(r9 +5) = r1
        ; test_reloc(string, 3, str1);
         130: (b7) r1 = 3
         131: (63) *(u32 *)(r10 -4) = r1
         132: (b7) r9 = 3
         133: (bf) r2 = r10
        ; int load_static_data(struct __sk_buff *skb)
         134: (07) r2 += -4
        ; test_reloc(string, 3, str1);
         135: (18) r1 = map[id:2239]
         137: (18) r3 = map[id:2235][0]+16   <-- direct addr in .data area
         139: (b7) r4 = 0
         140: (85) call array_map_update_elem#100464
         141: (b7) r1 = 111
        ; __builtin_memcpy(&str2[2], "hello", sizeof("hello"));
         142: (73) *(u8 *)(r8 +6) = r1       <-- further access based on .bss data
         143: (b7) r1 = 108
         144: (73) *(u8 *)(r8 +5) = r1
        [...]

    For the Cilium use-case in particular, this enables migrating configuration constants from the Cilium daemon's generated header defines into global data sections, such that expensive runtime recompilations with LLVM can be avoided altogether. Instead, the ELF file becomes effectively a "template", meaning it is compiled only once (!) and the Cilium daemon will then rewrite relevant configuration data in the ELF's .data or .rodata sections directly instead of recompiling the program. The updated ELF is then loaded into the kernel and atomically replaces the existing program in the networking datapath. More info in [0].

    Based upon a recent fix in LLVM, commit c0db6b6bd444 ("[BPF] Don't fail for static variables").

    [0] LPC 2018, BPF track, "ELF relocation for static data in BPF", http://vger.kernel.org/lpc-bpf2018.html#session-3

    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andriin@fb.com>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: teach libbpf about log_level bit 2 (Alexei Starovoitov, 2019-04-04; 1 file, -1/+1)

    Allow bpf_prog_load_xattr() to specify log_level for program loading. Teach libbpf to accept log_level with bit 2 set.

    Increase the default BPF_LOG_BUF_SIZE from 256k to 16M. There is no downside to increasing it to the maximum allowed by old kernels. The existing 256k limit caused ENOSPC errors, and users were not able to see the verifier error, which is printed at the end of the verifier log.

    If ENOSPC is hit, double the verifier log and try again to capture the verifier error.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* tools/bpf: replace bzero with memset (Andrii Nakryiko, 2019-02-14; 1 file, -24/+24)

    The bzero() call is deprecated and superseded by memset().

    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Reported-by: David Laight <david.laight@aculab.com>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* tools/bpf: add log_level to bpf_load_program_attr (Yonghong Song, 2019-02-07; 1 file, -5/+17)

    The kernel verifier has three levels of logs:
        0: no logs
        1: logs mostly useful
      > 1: verbose

    The current libbpf API functions bpf_load_program_xattr() and bpf_load_program() cannot specify log_level. The bcc, however, provides an interface for the user to specify log_level 2 for verbose output.

    This patch added log_level into the structure bpf_load_program_attr, so users, including bcc, can use bpf_load_program_xattr() to change the log_level. The supported log_level values are 0, 1, and 2.

    The bpf selftest test_sock.c is modified to enable log_level = 2. If the "verbose" in test_sock.c is changed to true, the test will output logs like below:

        $ ./test_sock
        func#0 @0
        0: R1=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
        0: (bf) r6 = r1
        1: R1=ctx(id=0,off=0,imm=0) R6_w=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
        1: (61) r7 = *(u32 *)(r6 +28)
        invalid bpf_context access off=28 size=4

        Test case: bind4 load with invalid access: src_ip6 .. [PASS]
        ...
        Test case: bind6 allow all .. [PASS]
        Summary: 16 PASSED, 0 FAILED

    Some test_sock tests are negative tests and the verbose verifier log will be printed out as shown in the above.

    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* tools/bpf: add missing strings.h include (Andrii Nakryiko, 2019-02-07; 1 file, -0/+1)

    A few files in libbpf are using the bzero() function (defined in the strings.h header), but don't include the corresponding header. When libbpf is added as a dependency to pahole, this non-deterministically causes warnings on some machines:

        bpf.c:225:2: warning: implicit declaration of function 'bzero' [-Wimplicit-function-declaration]
          bzero(&attr, sizeof(attr));
          ^~~~~

    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: introduce bpf_map_lookup_elem_flags() (Alexei Starovoitov, 2019-02-01; 1 file, -0/+13)

    Introduce the helper

        int bpf_map_lookup_elem_flags(int fd, const void *key, void *value, __u64 flags)

    to look up array/hash/cgroup_local_storage elements with the BPF_F_LOCK flag.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* bpf: libbpf: retry loading program on EAGAIN (Lorenz Bauer, 2019-01-15; 1 file, -4/+15)

    Commit c3494801cd17 ("bpf: check pending signals while verifying programs") makes it possible for BPF_PROG_LOAD to fail with EAGAIN. Retry unconditionally in this case.

    Fixes: c3494801cd17 ("bpf: check pending signals while verifying programs")
    Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* bpf: libbpf: Add btf_line_info support to libbpf (Martin KaFai Lau, 2018-12-09; 1 file, -29/+57)

    This patch adds bpf_line_info support to libbpf:
    1) Parsing the line_info sec from ".BTF.ext"
    2) Relocating the line_info. If the main prog *_info relocation fails, it will ignore the remaining subprog line_info and continue. If the subprog *_info relocation fails, it will bail out.
    3) BPF_PROG_LOAD a prog with line_info

    Signed-off-by: Martin KaFai Lau <kafai@fb.com>
    Acked-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: libbpf: Refactor and bug fix on the bpf_func_info loading logic (Martin KaFai Lau, 2018-12-09; 1 file, -2/+5)

    This patch refactors and fixes a bug in libbpf's bpf_func_info loading logic. The bug fix and refactoring target the same commit 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections"), which is in the bpf-next branch.

    1) In bpf_load_program_xattr(), it should retry when errno == E2BIG regardless of log_buf and log_buf_sz. This patch fixes it.

    2) btf_ext__reloc_init() and btf_ext__reloc() are essentially the same except btf_ext__reloc_init() always has insns_cnt == 0. Hence, btf_ext__reloc_init() is removed. btf_ext__reloc() is also renamed to btf_ext__reloc_func_info() to get ready for the line_info support in the next patch.

    3) Consolidate the func_info section logic from "btf_ext_parse_hdr()", "btf_ext_validate_func_info()" and "btf_ext__new()" into a new function "btf_ext_copy_func_info()" such that similar logic can be reused by the later line_info patch.

    4) The next line_info patch will store line_info_cnt instead of line_info_len in the bpf_program because the kernel takes line_info_cnt as well. It will save a few "len" to "cnt" conversions and will also save some function args. Hence, this patch also makes bpf_program store func_info_cnt instead of func_info_len.

    5) btf_ext depends on btf, e.g. the func_info's type_id in ".BTF.ext" is not useful when ".BTF" is absent. This patch only initializes the obj->btf_ext pointer after it has successfully initialized the obj->btf pointer. This avoids always checking "obj->btf && obj->btf_ext" together for accessing ".BTF.ext"; checking "obj->btf_ext" alone will do.

    6) Move "struct btf_sec_func_info" from btf.h to btf.c. There is no external usage outside btf.c.

    Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections")
    Signed-off-by: Martin KaFai Lau <kafai@fb.com>
    Acked-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: add bpf_prog_test_run_xattr (Lorenz Bauer, 2018-12-04; 1 file, -0/+23)

    Add a new function which encourages safe usage of the test interface. bpf_prog_test_run continues to work as before, but should be considered unsafe.

    Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: Add BPF_F_ANY_ALIGNMENT. (David Miller, 2018-11-30; 1 file, -4/+4)

    Often we want to write test cases that check things like bad context offset accesses. And one way to do this is to use an odd offset on, for example, a 32-bit load.

    This unfortunately triggers the alignment checks first on platforms that do not set CONFIG_EFFICIENT_UNALIGNED_ACCESS. So the test case sees the alignment failure rather than what it was testing for.

    It is often not completely possible to respect the original intention of the test, or even test the same exact thing, while solving the alignment issue.

    Another option could have been to check the alignment after the context and other validations are performed by the verifier, but that is a non-trivial change to the verifier.

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: libbpf: remove map name retry from bpf_create_map_xattr (Stanislav Fomichev, 2018-11-21; 1 file, -10/+1)

    Instead, check for a newly created caps.name bpf_object capability. If the kernel doesn't support names, don't specify the attribute.

    See commit 23499442c319 ("bpf: libbpf: retry map creation without the name") for the rationale.

    Signed-off-by: Stanislav Fomichev <sdf@google.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* tools/bpf: add support to read .BTF.ext sections (Yonghong Song, 2018-11-20; 1 file, -1/+45)

    The .BTF section is already available to encode types. These types can be used for map pretty print. The whole .BTF will be passed to the kernel as well, so the kernel can verify it and return it to user space for pretty print etc.

    The llvm patch at https://reviews.llvm.org/D53736 will generate the .BTF section and one more section, .BTF.ext. The .BTF.ext section encodes function type information and line information. Note that this patch set only supports function type info. The functionality is implemented in libbpf.

    The .BTF section can be directly loaded into the kernel; the .BTF.ext section cannot. The loader may need to do some relocation and merging, similar to merging multiple code sections, before loading into the kernel.

    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>