path: root/tools/lib/bpf
Commit message    Author    Age    Files    Lines
* libbpf: Add btf__parse_raw() and generic btf__parse() APIsAndrii Nakryiko2020-08-033-38/+83
| | | | | | | | | | Add public APIs to parse BTF from a raw data file (e.g., /sys/kernel/btf/vmlinux), as well as a generic btf__parse(), which will try to determine the correct format, currently either raw or ELF. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200802013219.864880-2-andriin@fb.com
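    A minimal usage sketch of the new API (not part of the commit; in this libbpf version parse errors are returned as ERR_PTR-encoded pointers):

    #include <bpf/btf.h>
    #include <bpf/libbpf.h>

    /* Let btf__parse() figure out whether the file is raw BTF or an ELF. */
    static struct btf *load_vmlinux_btf(void)
    {
        struct btf *btf = btf__parse("/sys/kernel/btf/vmlinux", NULL);

        return libbpf_get_error(btf) ? NULL : btf;
    }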
* libbpf: Add bpf_link detach APIsAndrii Nakryiko2020-08-015-0/+20
| | | | | | | | | | | Add low-level bpf_link_detach() API. Also add higher-level bpf_link__detach() one. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200731182830.286260-3-andriin@fb.com
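    A sketch of how the two new calls relate (the link and FD arguments are illustrative):

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    static int force_detach(struct bpf_link *link, int pinned_link_fd)
    {
        int err = bpf_link__detach(link);          /* high-level wrapper */

        if (err)
            return err;
        return bpf_link_detach(pinned_link_fd);    /* low-level, FD-based */
    }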
* libbpf: Fix register in PT_REGS MIPS macrosJerry Crunchtime2020-07-311-2/+2
| | | | | | | | | | | | The o32, n32 and n64 calling conventions require the return value to be stored in $v0, which maps to the $2 register, i.e., register 2. Fixes: c1932cd ("bpf: Add MIPS support to samples/bpf.") Signed-off-by: Jerry Crunchtime <jerry.c.t@web.de> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/43707d31-0210-e8f0-9226-1af140907641@web.de
* libbpf: Make destructors more robust by handling ERR_PTR(err) casesAndrii Nakryiko2020-07-313-8/+7
| | | | | | | | | | | | | | | Most libbpf "constructors" on failure return an ERR_PTR(err) result encoded as a pointer. It's a common mistake to eventually pass such malformed pointers into xxx__destroy()/xxx__free() "destructors". So instead of fixing up cleanup code in selftests and user programs, handle such error pointers in the destructors themselves. This already works beautifully for NULL pointers passed to destructors, so it might as well work for error pointers too. Suggested-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20200729232148.896125-1-andriin@fb.com
* libbpf: Add support for BPF XDP linkAndrii Nakryiko2020-07-253-1/+11
| | | | | | | | | | | | | | Sync UAPI header and add support for using bpf_link-based XDP attachment. Make the xdp/ prog type set the expected attach type. The kernel didn't enforce attach_type for XDP programs before, so there are no backwards compatibility issues there. Also fix the section_names selftest to recognize that xdp prog types now have an expected attach type. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200722064603.3350758-8-andriin@fb.com
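    A sketch of attaching an SEC("xdp") program through the new bpf_link-based path (interface lookup is illustrative):

    #include <net/if.h>
    #include <bpf/libbpf.h>

    static struct bpf_link *attach_xdp_link(struct bpf_program *prog,
                                            const char *ifname)
    {
        int ifindex = if_nametoindex(ifname);

        if (!ifindex)
            return NULL;
        return bpf_program__attach_xdp(prog, ifindex);  /* bpf_link-based */
    }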
* libbpf: Print hint when PERF_EVENT_IOC_SET_BPF returns -EPROTOSong Liu2020-07-251-0/+3
| | | | | | | | | | | The kernel prevents potential unwinder warnings and crashes by blocking BPF program with bpf_get_[stack|stackid] on perf_event without PERF_SAMPLE_CALLCHAIN, or with exclude_callchain_[kernel|user]. Print a hint message in libbpf to help the user debug such issues. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200723180648.1429892-4-songliubraving@fb.com
* tools/libbpf: Add support for bpf map element iteratorYonghong Song2020-07-254-3/+14
| | | | | | | | | | | | Add map_fd to bpf_iter_attach_opts and flags to bpf_link_create_opts. Later on, bpftool or selftest will be able to create a bpf map element iterator by passing map_fd to the kernel during link creation time. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200723184117.590673-1-yhs@fb.com
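    A sketch of creating a map-element iterator link, assuming the new map_fd opts field and an SEC("iter/bpf_map_elem") program:

    #include <bpf/libbpf.h>

    static struct bpf_link *attach_map_elem_iter(struct bpf_program *prog,
                                                 struct bpf_map *map)
    {
        DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts,
            .map_fd = bpf_map__fd(map),   /* target map for the iterator */
        );

        return bpf_program__attach_iter(prog, &opts);
    }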
* libbpf bpf_helpers: Use __builtin_offsetof for offsetofIan Rogers2020-07-211-1/+1
| | | | | | | | | | | | | | The non-builtin route for offsetof has a dependency on size_t from stdlib.h/stdint.h that is undeclared and may break targets. The offsetof macro in bpf_helpers may disable the same macro in other headers that have a #ifdef offsetof guard. Rather than add additional dependencies improve the offsetof macro declared here to use the builtin that is available since llvm 3.7 (the first with a BPF backend). Signed-off-by: Ian Rogers <irogers@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200720061741.1514673-1-irogers@google.com
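    The builtin-based definition amounts to (sketch):

    /* Rely on the compiler builtin instead of a size_t-based definition. */
    #define offsetof(TYPE, MEMBER) __builtin_offsetof(TYPE, MEMBER)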
* libbpf: Add support for SK_LOOKUP program typeJakub Sitnicki2020-07-174-0/+10
| | | | | | | | | | Make libbpf aware of the newly added program type, and assign it a section name. Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200717103536.397595-13-jakub@cloudflare.com
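    A minimal program using the new section prefix (the program body is illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("sk_lookup/pass_all")
    int pass_all(struct bpf_sk_lookup *ctx)
    {
        return SK_PASS;
    }

    char _license[] SEC("license") = "GPL";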
* libbpf: Add SEC name for xdp programs attached to CPUMAPLorenzo Bianconi2020-07-161-0/+2
| | | | | | | | | | | | As for DEVMAP, support SEC("xdp_cpumap/") as a short cut for loading the program with type BPF_PROG_TYPE_XDP and expected attach type BPF_XDP_CPUMAP. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/33174c41993a6d860d9c7c1f280a2477ee39ed11.1594734381.git.lorenzo@kernel.org
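    A sketch of a program using the new prefix; libbpf sets prog type BPF_PROG_TYPE_XDP with expected attach type BPF_XDP_CPUMAP:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp_cpumap/pass")
    int xdp_on_remote_cpu(struct xdp_md *ctx)
    {
        return XDP_PASS;
    }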
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-nextDavid S. Miller2020-07-135-74/+102
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Alexei Starovoitov says: ==================== pull-request: bpf-next 2020-07-13 The following pull-request contains BPF updates for your *net-next* tree. We've added 36 non-merge commits during the last 7 day(s) which contain a total of 62 files changed, 2242 insertions(+), 468 deletions(-). The main changes are: 1) Avoid trace_printk warning banner by switching bpf_trace_printk to use its own tracing event, from Alan. 2) Better libbpf support on older kernels, from Andrii. 3) Additional AF_XDP stats, from Ciara. 4) build time resolution of BTF IDs, from Jiri. 5) BPF_CGROUP_INET_SOCK_RELEASE hook, from Stanislav. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
| * tools/bpftool: Strip away modifiers from global variablesAndrii Nakryiko2020-07-131-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Reliably remove all the type modifiers from read-only (.rodata) global variable definitions, including cases of inner field const modifiers and arrays of const values. Also modify one of selftests to ensure that const volatile struct doesn't prevent user-space from modifying .rodata variable. Fixes: 985ead416df3 ("bpftool: Add skeleton codegen command") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200713232409.3062144-3-andriin@fb.com
| * libbpf: Support stripping modifiers for btf_dumpAndrii Nakryiko2020-07-132-2/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | One important use case when emitting const/volatile/restrict is undesirable is BPF skeleton generation of DATASEC layout. These are further memory-mapped and can be written/read from user-space directly. For important case of .rodata variables, bpftool strips away first-level modifiers, to make their use on user-space side simple and not requiring extra type casts to override compiler complaining about writing to const variables. This logic works mostly fine, but breaks in some more complicated cases. E.g.: const volatile int params[10]; Because in BTF it's a chain of ARRAY -> CONST -> VOLATILE -> INT, bpftool stops at ARRAY and doesn't strip CONST and VOLATILE. In skeleton this variable will be emitted as is. So when used from user-space, compiler will complain about writing to const array. This is problematic, as also mentioned in [0]. To solve this for arrays and other non-trivial cases (e.g., inner const/volatile fields inside the struct), teach btf_dump to strip away any modifier, when requested. This is done as an extra option on btf_dump__emit_type_decl() API. Reported-by: Anton Protopopov <a.s.protopopov@gmail.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200713232409.3062144-2-andriin@fb.com
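    A sketch of the btf_dump side, assuming the new option is the strip_mods field of the emit_type_decl opts:

    #include <bpf/btf.h>
    #include <bpf/libbpf.h>

    static void emit_plain_decl(struct btf_dump *d, __u32 type_id,
                                const char *var_name)
    {
        DECLARE_LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts,
            .field_name = var_name,
            .strip_mods = true,   /* drop const/volatile/restrict at any level */
        );

        btf_dump__emit_type_decl(d, type_id, &opts);
    }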
| * libbpf: Fix memory leak and optimize BTF sanitizationAndrii Nakryiko2020-07-103-10/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Coverity's static analysis helpfully reported a memory leak introduced by 0f0e55d8247c ("libbpf: Improve BTF sanitization handling"). While fixing it, I realized that btf__new() already creates a memory copy, so there is no need to do this. So this patch also fixes misleading btf__new() signature to make data into a `const void *` input parameter. And it avoids unnecessary memory allocation and copy in BTF sanitization code altogether. Fixes: 0f0e55d8247c ("libbpf: Improve BTF sanitization handling") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200710011023.1655008-1-andriin@fb.com
| * libbpf: Handle missing BPF_OBJ_GET_INFO_BY_FD gracefully in perf_bufferAndrii Nakryiko2020-07-091-11/+20
| | | | | | | | | | | | | | | | | perf_buffer__new() relies on BPF_OBJ_GET_INFO_BY_FD availability for a few sanity checks. OBJ_GET_INFO for maps is actually a much more recent feature than perf_buffer support itself, so this causes unnecessary problems on old kernels from before BPF_OBJ_GET_INFO_BY_FD was added. This patch makes those sanity checks optional and just assumes the best if the command is not supported. If the user specified something incorrectly (e.g., a wrong map type), the kernel will reject it later anyway, except the user won't get a nice explanation as to why it failed. This seems like a good trade-off for supporting perf_buffer on old kernels. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200708015318.3827358-6-andriin@fb.com
| * libbpf: Improve BTF sanitization handlingAndrii Nakryiko2020-07-091-45/+58
| | | | | | | | | | | | | | | | | | | | | Change the sanitization process to preserve the original BTF, which might be used by libbpf itself for Kconfig externs, CO-RE relocs, etc, even if the kernel is old and doesn't support BTF. To achieve that, if libbpf detects the need for BTF sanitization, it will clone the original BTF, sanitize it in place, attempt to load it into the kernel, and, if successful, preserve the loaded BTF FD in the original `struct btf`, while freeing the sanitized local copy. If the kernel doesn't support any BTF, the original btf and btf_ext will still be preserved to be used later for CO-RE relocation and other BTF-dependent libbpf features, which don't depend on kernel BTF support. The patch takes care not to specify BTF and BTF.ext features when loading BPF programs and/or maps, if it was detected that the kernel doesn't support BTF features. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200708015318.3827358-4-andriin@fb.com
| * libbpf: Add btf__set_fd() for more control over loaded BTF FDAndrii Nakryiko2020-07-093-1/+8
| | | | | | | | | | | | | | | | | Add a setter for the BTF FD to allow applications more fine-grained control in more advanced scenarios. Storing the BTF FD inside `struct btf` provides little benefit and probably would be better done differently (e.g., btf__load() could just return the FD on success), but we are stuck with this due to backwards compatibility. The main problem is that it's impossible to load BTF and then free user-space memory, but keep the FD intact, because `struct btf` assumes ownership of that FD upon successful load and will attempt to close it during btf__free(). To allow callers (e.g., libbpf itself for BTF sanitization) to have more control over this, add btf__set_fd() to allow resetting the FD arbitrarily, if necessary. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200708015318.3827358-3-andriin@fb.com
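    A sketch of taking ownership of the loaded-BTF FD away from struct btf:

    #include <bpf/btf.h>

    static int steal_btf_fd(struct btf *btf)
    {
        int fd = btf__fd(btf);

        btf__set_fd(btf, -1);   /* btf__free() will no longer close it */
        return fd;
    }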
| * libbpf: Make BTF finalization strictAndrii Nakryiko2020-07-091-12/+4
| | | | | | | | | | | | | | | | | | | | | | With valid ELF and valid BTF, there is no reason (apart from bugs) why BTF finalization should fail. So make it strict and return error if it fails. This makes CO-RE relocation more reliable, as they are not going to be just silently skipped, if BTF finalization failed. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200708015318.3827358-2-andriin@fb.com
| * libbpf: Add support for BPF_CGROUP_INET_SOCK_RELEASEStanislav Fomichev2020-07-081-0/+4
| | | | | | | | | | | | | | | | | | Add auto-detection for the cgroup/sock_release programs. Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200706230128.4073544-3-sdf@google.com
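    A sketch of a program picked up by the new section name (program body is illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup/sock_release")
    int on_sock_release(struct bpf_sock *ctx)
    {
        return 1;   /* allow */
    }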
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netDavid S. Miller2020-07-113-6/+18
|\ \ | |/ |/| | | | | | | | | All conflicts seemed rather trivial, with some guidance from Saeed Mahameed on the tc_ct.c one. Signed-off-by: David S. Miller <davem@davemloft.net>
| * libbpf: Fix libbpf hashmap on (I)LP32 architecturesJakub Bogusz2020-07-091-4/+8
| | | | | | | | | | | | | | On ILP32, the 64-bit result was shifted by a value calculated for the 32-bit long type, so the returned value fell far outside the hashmap capacity. As advised by Andrii Nakryiko, this patch uses a different hashing variant for architectures with size_t shorter than long long. Fixes: e3b924224028 ("libbpf: add resizable non-thread safe internal hashmap") Signed-off-by: Jakub Bogusz <qboosh@pld-linux.org> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200709225723.1069937-1-andriin@fb.com
| * libbpf: Adjust SEC short cut for expected attach type BPF_XDP_DEVMAPJesper Dangaard Brouer2020-06-251-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | Adjust the SEC("xdp_devmap/") prog type prefix to contain a slash "/" for expected attach type BPF_XDP_DEVMAP. This is consistent with other prog types like tracing. Fixes: 2778797037a6 ("libbpf: Add SEC name for xdp programs attached to device map") Suggested-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/159309521882.821855.6873145686353617509.stgit@firesoul
| * libbpf: Fix CO-RE relocs against .text sectionAndrii Nakryiko2020-06-231-1/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | bpf_object__find_program_by_title(), used by CO-RE relocation code, doesn't return .text "BPF program", if it is a function storage for sub-programs. Because of that, any CO-RE relocation in helper non-inlined functions will fail. Fix this by searching for .text-corresponding BPF program manually. Adjust one of bpf_iter selftest to exhibit this pattern. Fixes: ddc7c3042614 ("libbpf: implement BPF CO-RE offset relocation algorithm") Reported-by: Yonghong Song <yhs@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200619230423.691274-1-andriin@fb.com
| * libbpf: Forward-declare bpf_stats_type for systems with outdated UAPI headersAndrii Nakryiko2020-06-221-0/+2
| | | | | | | | | | | | On systems that don't yet have the very latest linux/bpf.h header, enum bpf_stats_type will be undefined, causing compilation warnings. Prevent this by forward-declaring the enum. Fixes: 0bee106716cf ("libbpf: Add support for command BPF_ENABLE_STATS") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20200621031159.2279101-1-andriin@fb.com
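    The fix amounts to a forward declaration along these lines (sketch):

    /* Defined in up-to-date linux/bpf.h; forward-declare for older UAPI
     * headers so the prototype below still compiles. */
    enum bpf_stats_type;

    int bpf_enable_stats(enum bpf_stats_type type);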
* | libbpf: Make bpf_endian co-exist with vmlinux.hAndrii Nakryiko2020-07-011-8/+35
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Make bpf_endian.h compatible with vmlinux.h. It is a frequent request from users wanting to use bpf_endian.h in their BPF applications using CO-RE and vmlinux.h. To achieve that, re-implement byte swap macros and drop all the header includes. This way it can be used both with linux header includes, as well as with a vmlinux.h. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200630152125.3631920-2-andriin@fb.com
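    A sketch of the now-supported combination (program logic is illustrative):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>   /* no longer pulls in linux/ headers */

    SEC("xdp")
    int pass_all(struct xdp_md *ctx)
    {
        __u16 http = bpf_htons(80);   /* constant-folded byte swap */

        return http ? XDP_PASS : XDP_DROP;
    }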
* | libbpf: Support disabling auto-loading BPF programsAndrii Nakryiko2020-06-283-8/+44
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently, bpf_object__load() (and by induction skeleton's load), will always attempt to prepare, relocate, and load into kernel every single BPF program found inside the BPF object file. This is often convenient and the right thing to do and what users expect. But there are plenty of cases (especially with BPF development constantly picking up the pace), where BPF application is intended to work with old kernels, with potentially reduced set of features. But on kernels supporting extra features, it would like to take a full advantage of them, by employing extra BPF program. This could be a choice of using fentry/fexit over kprobe/kretprobe, if kernel is recent enough and is built with BTF. Or BPF program might be providing optimized bpf_iter-based solution that user-space might want to use, whenever available. And so on. With libbpf and BPF CO-RE in particular, it's advantageous to not have to maintain two separate BPF object files to achieve this. So to enable such use cases, this patch adds ability to request not auto-loading chosen BPF programs. In such case, libbpf won't attempt to perform relocations (which might fail due to old kernel), won't try to resolve BTF types for BTF-aware (tp_btf/fentry/fexit/etc) program types, because BTF might not be present, and so on. Skeleton will also automatically skip auto-attachment step for such not loaded BPF programs. Overall, this feature allows to simplify development and deployment of real-world BPF applications with complicated compatibility requirements. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200625232629.3444003-2-andriin@fb.com
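    A sketch of opting a program out of loading on older kernels (section name and error handling are illustrative):

    #include <errno.h>
    #include <bpf/libbpf.h>

    static int skip_optional_prog(struct bpf_object *obj)
    {
        struct bpf_program *prog;

        prog = bpf_object__find_program_by_title(obj, "fentry/do_unlinkat");
        if (!prog)
            return -ENOENT;
        return bpf_program__set_autoload(prog, false);  /* skip at load time */
    }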
* | libbpf: Prevent loading vmlinux BTF twiceAndrii Nakryiko2020-06-241-11/+22
| | | | | | | | | | | | | | | | | | | | Prevent loading/parsing vmlinux BTF twice in some cases: for CO-RE relocations and for BTF-aware hooks (tp_btf, fentry/fexit, etc). Fixes: a6ed02cac690 ("libbpf: Load btf_vmlinux only once per object.") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200624043805.1794620-1-andriin@fb.com
* | libbpf: Fix spelling mistake "kallasyms" -> "kallsyms"Colin Ian King2020-06-241-1/+1
| | | | | | | | | | | | | | | | | | There is a spelling mistake in a pr_warn message. Fix it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200623084207.149253-1-colin.king@canonical.com
* | libbpf: Wrap source argument of BPF_CORE_READ macro in parenthesesAndrii Nakryiko2020-06-221-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | Wrap source argument of BPF_CORE_READ family of macros into parentheses to allow uses like this: BPF_CORE_READ((struct cast_struct *)src, a, b, c); Fixes: 7db3822ab991 ("libbpf: Add BPF_CORE_READ/BPF_CORE_READ_INTO helpers") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200619231703.738941-8-andriin@fb.com
* | libbpf: Add support for extracting kernel symbol addressesAndrii Nakryiko2020-06-223-6/+144
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add support for another (in addition to existing Kconfig) special kind of externs in BPF code, kernel symbol externs. Such externs allow BPF code to "know" a kernel symbol's address and either use it for comparisons with kernel data structures (e.g., struct file's f_op pointer, to distinguish different kinds of file), or, with the help of bpf_probe_read_kernel(), to follow pointers and read data from global variables. Kernel symbol addresses are found through /proc/kallsyms, which should be present in the system. Currently, such kernel symbol variables are typeless: they have to be defined as `extern const void <symbol>` and the only operation you can do (in C code) with them is to take their address. Such externs should reside in a special section '.ksyms'. The bpf_helpers.h header provides the __ksym macro for this. Strong vs weak semantics stay the same as with Kconfig externs. If a symbol is not found in /proc/kallsyms, this will be a failure for strong (non-weak) externs, but will be defaulted to 0 for weak externs. If the same symbol is defined multiple times in /proc/kallsyms, then it will be an error if any of the associated addresses differ. In that case, the address is ambiguous, so libbpf errs on the side of caution, rather than confusing the user with a randomly chosen address. In the future, once the kernel is extended with BTF information for variables, such ksym externs will be supported in a typed version, which will allow a BPF program to read the variable's contents directly, similarly to how it's done for fentry/fexit input arguments. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Hao Luo <haoluo@google.com> Link: https://lore.kernel.org/bpf/20200619231703.738941-3-andriin@fb.com
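    A sketch of a typeless ksym extern (the symbol and program section are illustrative):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    extern const void bpf_fentry_test1 __ksym;   /* resolved via /proc/kallsyms */

    SEC("raw_tp/sys_enter")
    int print_ksym_addr(void *ctx)
    {
        bpf_printk("addr: %llx", (__u64)&bpf_fentry_test1);
        return 0;
    }

    char _license[] SEC("license") = "GPL";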
* | libbpf: Generalize libbpf externs supportAndrii Nakryiko2020-06-221-140/+206
| | | | | | | | | | | | | | | | | | | | | | | | Switch existing Kconfig externs to be just one of few possible kinds of more generic externs. This refactoring is in preparation for ksymbol extern support, added in the follow up patch. There are no functional changes intended. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Hao Luo <haoluo@google.com> Link: https://lore.kernel.org/bpf/20200619231703.738941-2-andriin@fb.com
* | libbpf: Add a bunch of attribute getters/setters for map definitionsAndrii Nakryiko2020-06-233-10/+134
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add a bunch of getters for various aspects of a BPF map. Some of these attributes (e.g., key_size, value_size, type, etc.) are available right now in struct bpf_map_def, but this patch adds getters allowing them to be fetched individually. The bpf_map_def approach isn't very scalable when ABI stability requirements are taken into account. It's much easier to extend libbpf and add support for new features when each aspect of a BPF map has a separate getter/setter. Getters follow the common naming convention of not explicitly having "get" in their name: bpf_map__type() returns the map type, bpf_map__key_size() returns the key_size. Setters, though, explicitly have "set" in their name: bpf_map__set_type(), bpf_map__set_key_size(). This patch ensures we now have a getter and a setter for the following map attributes: - type; - max_entries; - map_flags; - numa_node; - key_size; - value_size; - ifindex. bpf_map__resize() enforces an unnecessary restriction of max_entries > 0. It is unnecessary, because libbpf actually supports zero max_entries for some cases (e.g., for a PERF_EVENT_ARRAY map) and treats it specially during map creation. To allow setting max_entries=0, a new bpf_map__set_max_entries() setter is added. bpf_map__resize()'s behavior is preserved for backwards compatibility reasons. A map ifindex getter is added as well. There is a setter already, but no corresponding getter. Fix this asymmetry as well. bpf_map__set_ifindex() itself is converted from a void function into an error-returning one, similar to other setters. The only error returned right now is -EBUSY, if the BPF map is already loaded and has a corresponding FD. One lacking attribute with no ability to get/set it or even specify it declaratively is numa_node. This patch fixes this gap and both adds a programmatic getter/setter, as well as support for the numa_node field in BTF-defined maps. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20200621062112.3006313-1-andriin@fb.com
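    A sketch of using a few of the new setters/getters before bpf_object__load():

    #include <errno.h>
    #include <bpf/libbpf.h>

    static int tune_map(struct bpf_map *map)
    {
        int err = bpf_map__set_max_entries(map, 0);   /* 0 is allowed now */

        if (err)
            return err;
        err = bpf_map__set_numa_node(map, 0);
        if (err)
            return err;
        return bpf_map__key_size(map) == sizeof(int) ? 0 : -EINVAL;
    }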
* | libbpf: Bump version to 0.1.0Andrii Nakryiko2020-06-171-0/+3
|/ | | | | | | | Bump libbpf version to 0.1.0, as new development cycle starts. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200617183132.1970836-1-andriin@fb.com
* libbpf: Support pre-initializing .bss global variablesAndrii Nakryiko2020-06-121-4/+0
| | | | | | | | | | | | Remove invalid assumption in libbpf that .bss map doesn't have to be updated in kernel. With addition of skeleton and memory-mapped initialization image, .bss doesn't have to be all zeroes when BPF map is created, because user-code might have initialized those variables from user-space. Fixes: eba9c5f498a1 ("libbpf: Refactor global data map initialization") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200612194504.557844-1-andriin@fb.com
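    With a skeleton, that means initialization like the following now reaches the kernel (skeleton and variable names are illustrative):

    #include "example.skel.h"

    int main(void)
    {
        struct example_bpf *skel = example_bpf__open();

        if (!skel)
            return 1;
        skel->bss->debug_level = 2;   /* .bss value set before load */
        return example_bpf__load(skel);
    }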
* libbpf: Handle GCC noreturn-turned-volatile quirkAndrii Nakryiko2020-06-101-9/+24
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Handle a GCC quirk of emitting extra volatile modifier in DWARF (and subsequently preserved in BTF by pahole) for function pointers marked as __attribute__((noreturn)). This was the way to mark such functions before GCC 2.5 added noreturn attribute. Drop such func_proto modifiers, similarly to how it's done for array (also to handle GCC quirk/bug). Such volatile attribute is emitted by GCC only, so existing selftests can't express such test. Simple repro is like this (compiled with GCC + BTF generated by pahole): struct my_struct { void __attribute__((noreturn)) (*fn)(int); }; struct my_struct a; Without this fix, output will be: struct my_struct { voidvolatile (*fn)(int); }; With the fix: struct my_struct { void (*fn)(int); }; Fixes: 351131b51c7a ("libbpf: add btf_dump API for BTF-to-C conversion") Reported-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/bpf/20200610052335.2862559-1-andriin@fb.com
* libbpf: Define __WORDSIZE if not availableArnaldo Carvalho de Melo2020-06-101-4/+3
| | | | | | | | | | | | | | | | Some systems, such as Android, don't have a define for __WORDSIZE; define it in terms of __SIZEOF_LONG__, as done in perf since 2012: http://git.kernel.org/torvalds/c/3f34f6c0233ae055b5 For reference: https://gcc.gnu.org/onlinedocs/cpp/Common-Predefined-Macros.html I build-tested it here and Andrii did some Travis CI build tests too. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200608161150.GA3073@kernel.org
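    The fallback amounts to (sketch):

    #ifndef __WORDSIZE
    #define __WORDSIZE (__SIZEOF_LONG__ * 8)
    #endif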
* libbpf: Add support for bpf_link-based netns attachmentJakub Sitnicki2020-06-013-5/+21
| | | | | | | | | | Add bpf_program__attach_netns(), which uses the LINK_CREATE subcommand to create an FD-based kernel bpf_link, for attach types tied to a network namespace, that is BPF_FLOW_DISSECTOR for the moment. Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200531082846.2117903-7-jakub@cloudflare.com
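    A usage sketch (the netns path is illustrative):

    #include <fcntl.h>
    #include <bpf/libbpf.h>

    static struct bpf_link *attach_flow_dissector(struct bpf_program *prog)
    {
        int netns_fd = open("/proc/self/ns/net", O_RDONLY);

        if (netns_fd < 0)
            return NULL;
        return bpf_program__attach_netns(prog, netns_fd);
    }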
* libbpf: Add _GNU_SOURCE for reallocarray to ringbuf.cAndrii Nakryiko2020-06-011-0/+3
| | | | | | | | | | | | On systems with recent enough glibc, reallocarray compat won't kick in, so reallocarray() itself has to come from stdlib.h include. But _GNU_SOURCE is necessary to enable it. So add it. Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20200601202601.2139477-1-andriin@fb.com
* libbpf: Add SEC name for xdp programs attached to device mapDavid Ahern2020-06-011-0/+2
| | | | | | | | | | | Support SEC("xdp_devmap*") as a short cut for loading the program with type BPF_PROG_TYPE_XDP and expected attach type BPF_XDP_DEVMAP. Signed-off-by: David Ahern <dsahern@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20200529220716.75383-5-dsahern@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: Add BPF ring buffer supportAndrii Nakryiko2020-06-015-1/+317
| | | | | | | | | | | | | | | | | | | | | | Declaring and instantiating a BPF ring buffer doesn't require any changes to libbpf, as it's just another type of map. So using the existing BTF-defined maps syntax with __uint(type, BPF_MAP_TYPE_RINGBUF) and __uint(max_entries, <size-of-ring-buf>) is all that's necessary to create and use a BPF ring buffer. This patch adds a BPF ring buffer consumer to libbpf. It is very similar to the perf_buffer implementation in terms of API, but also attempts to fix some minor problems and inconveniences with the existing perf_buffer API. ring_buffer supports both the single ring buffer use case (with just using ring_buffer__new()), as well as adding more ring buffers, each with its own callback and context. This allows efficiently polling and consuming multiple, potentially completely independent, ring buffers, using a single epoll instance. The latter is actually a problem in practice for applications that are using multiple sets of perf buffers. They have to create multiple instances of struct perf_buffer and poll them independently or in a loop, each approach having its own problems (e.g., inability to use a common poll timeout). struct ring_buffer eliminates this problem by aggregating many independent ring buffer instances under a single "ring buffer manager". Second, perf_buffer's callback can't return an error, so applications that need to stop polling due to an error in data, or data signalling the end, have to use extra mechanisms to signal that polling has to stop. ring_buffer's callback can return an error, which will be passed back to user code and can be acted upon appropriately. Two APIs allow consuming ring buffer data: - ring_buffer__poll(), which will wait for a data availability notification and will consume data only from the reported ring buffer(s); this API allows using resources efficiently by reading data only when it becomes available; - ring_buffer__consume(), which will attempt to read new records regardless of the data availability notification sub-system. This API is useful for cases when the lowest latency is required, at the expense of burning CPU resources. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200529075424.3139988-3-andriin@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
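    A consumer sketch (map FD and timeout are illustrative):

    #include <bpf/libbpf.h>

    static int handle_event(void *ctx, void *data, size_t size)
    {
        return 0;   /* a negative return stops polling */
    }

    static int poll_ringbuf(int ringbuf_map_fd)
    {
        struct ring_buffer *rb;

        rb = ring_buffer__new(ringbuf_map_fd, handle_event, NULL, NULL);
        if (!rb)
            return -1;
        while (ring_buffer__poll(rb, 100 /* ms */) >= 0)
            ;   /* consume until error or signal */
        ring_buffer__free(rb);
        return 0;
    }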
* libbpf: Fix perf_buffer__free() API for sparse allocsEelco Chaudron2020-06-011-1/+4
| | | | | | | | | | | | In case the cpu_bufs are sparsely allocated, they are not all freed. These changes fix this. Fixes: fb84b8224655 ("libbpf: add perf buffer API") Signed-off-by: Eelco Chaudron <echaudro@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/159056888305.330763.9684536967379110349.stgit@ebuild Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: Use .so dynamic symbols for abi checkYauheni Kaliuta2020-06-011-2/+2
| | | | | | | | | | | | | | Since dynamic symbols are used for dynamic linking it makes sense to use them (readelf --dyn-syms) for abi check. Found with some configuration on powerpc where linker puts local *.plt_call.* symbols into .so. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200525061846.16524-1-yauheni.kaliuta@redhat.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: Install headers as part of make installNikolay Borisov2020-06-011-1/+1
| | | | | | | | | | | | Current 'make install' results in only pkg-config and library binaries being installed. For consistency also install headers as part of "make install" Signed-off-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200526174612.5447-1-nborisov@suse.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: Add API to consume the perf ring buffer contentEelco Chaudron2020-06-013-0/+21
| | | | | | | | | | | | | | | | | This new API, perf_buffer__consume, can be used as follows: - When you have a perf ring where wakeup_events is higher than 1, and you have remaining data in the rings you would like to pull out on exit (or maybe based on a timeout). - For low latency cases where you burn a CPU that constantly polls the queues. Signed-off-by: Eelco Chaudron <echaudro@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/159048487929.89441.7465713173442594608.stgit@ebuild Signed-off-by: Alexei Starovoitov <ast@kernel.org>
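    A sketch of the exit-time drain case:

    #include <bpf/libbpf.h>

    static void drain_remaining(struct perf_buffer *pb)
    {
        /* Read whatever is left in all rings without waiting for a wakeup. */
        perf_buffer__consume(pb);
    }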
* bpf, libbpf: Enable get{peer, sock}name attach typesDaniel Borkmann2020-05-191-0/+8
| | | | | | | | | | | Trivial patch to add the new get{peer,sock}name attach types to the section definitions in order to hook them up to sock_addr cgroup program type. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Andrey Ignatov <rdna@fb.com> Link: https://lore.kernel.org/bpf/7fcd4b1e41a8ebb364754a5975c75a7795051bd2.1589841594.git.daniel@iogearbox.net
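    One of the new section names in use (sketch; program body is illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup/getpeername4")
    int audit_getpeername(struct bpf_sock_addr *ctx)
    {
        return 1;   /* allow; could also rewrite ctx->user_ip4/user_port */
    }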
* libbpf, hashmap: Fix signedness warningsIan Rogers2020-05-161-3/+2
| | | | | | | | | | | | | | | | | Fixes the following warnings: hashmap.c: In function ‘hashmap__clear’: hashmap.h:150:20: error: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘long unsigned int’} [-Werror=sign-compare] 150 | for (bkt = 0; bkt < map->cap; bkt++) \ hashmap.c: In function ‘hashmap_grow’: hashmap.h:150:20: error: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘long unsigned int’} [-Werror=sign-compare] 150 | for (bkt = 0; bkt < map->cap; bkt++) \ Signed-off-by: Ian Rogers <irogers@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200515165007.217120-4-irogers@google.com
* libbpf, hashmap: Remove unused #includeIan Rogers2020-05-161-1/+0
| | | | | | | | | | | | Remove #include of libbpf_internal.h that is unused. Discussed in this thread: https://lore.kernel.org/lkml/CAEf4BzZRmiEds_8R8g4vaAeWvJzPb4xYLnpF0X2VNY8oTzkphQ@mail.gmail.com/ Signed-off-by: Ian Rogers <irogers@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200515165007.217120-3-irogers@google.com
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netDavid S. Miller2020-05-151-2/+2
|\ | | | | | | | | | | | | | | | | | | Move the bpf verifier trace check into the new switch statement in HEAD. Resolve the overlapping changes in hinic, where bug fixes overlap the addition of VF support. Signed-off-by: David S. Miller <davem@davemloft.net>
| * libbpf: Fix register naming in PT_REGS s390 macrosSumanth Korikkar2020-05-141-2/+2
| | | | | | | | | | | | | | | | | | | | | | Fix register naming in PT_REGS s390 macros Fixes: b8ebce86ffe6 ("libbpf: Provide CO-RE variants of PT_REGS macros") Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200513154414.29972-1-sumanthk@linux.ibm.com
* | bpf: Change btf_iter func proto prefix to "bpf_iter_"Yonghong Song2020-05-131-1/+1
| | | | | | | | | | | | | | | | | | | | | | This is to be consistent with tracing and lsm programs which have prefix "bpf_trace_" and "bpf_lsm_" respectively. Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200513180216.2949387-1-yhs@fb.com