| Commit message | Author | Age | Files | Lines |
| |
There's a limit of 40 programs that can be attached
to the trampoline for one function. Add a test that
attaches that many, plus one extra that must fail.
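A userspace sketch of the idea (not the selftest's exact code; the object
file name is made up):

  #include <bpf/libbpf.h>

  #define MAX_TRAMP_PROGS 40    /* per-function trampoline limit */

  int main(void)
  {
      struct bpf_link *link[MAX_TRAMP_PROGS + 1];
      struct bpf_program *prog;
      struct bpf_object *obj;
      int i;

      for (i = 0; i <= MAX_TRAMP_PROGS; i++) {
          /* each object holds one fexit program on the same function */
          obj = bpf_object__open_file("fexit_test.o", NULL);
          if (libbpf_get_error(obj) || bpf_object__load(obj))
              return 1;
          prog = bpf_program__next(NULL, obj);
          link[i] = bpf_program__attach_trace(prog);
          if (i < MAX_TRAMP_PROGS && libbpf_get_error(link[i]))
              return 1;    /* the first 40 must attach fine */
          if (i == MAX_TRAMP_PROGS && !libbpf_get_error(link[i]))
              return 1;    /* the 41st attach must fail */
      }
      return 0;
  }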
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200123161508.915203-4-jolsa@kernel.org
| |
The reuseport tests currently suffer from a race condition: FIN
packets count towards DROP_ERR_SKB_DATA, since they don't contain
a valid struct cmd. Tests will spuriously fail depending on whether
check_results is called before or after the FIN is processed.
Exit the BPF program early if FIN is set.
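The early exit looks roughly like this inside the reuseport program (a
sketch following the selftest's helpers and error codes; reuse_md is the
program's struct sk_reuseport_md context):

  void *data = reuse_md->data;
  void *data_end = reuse_md->data_end;
  struct tcphdr *th = data;

  if (th + 1 > data_end)
      GOTO_DONE(DROP_ERR_SKB_DATA);
  if (th->fin)
      /* The connection is being torn down at the end of a test.
       * It can't contain a valid struct cmd, so exit early.
       */
      return SK_PASS;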
Fixes: 91134d849a0e ("bpf: Test BPF_PROG_TYPE_SK_REUSEPORT")
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200124112754.19664-3-lmb@cloudflare.com
| |
This patch adds a bpf_cubic example. Some highlights:
1. CONFIG_HZ .kconfig map is used.
2. In bictcp_update(), the calculation is changed to use usec
resolution (i.e. USEC_PER_JIFFY) instead of jiffies.
Thus, usecs_to_jiffies() is not used in bpf_cubic.c.
3. In bictcp_update() [under tcp_friendliness], the original
"while (ca->ack_cnt > delta)" loop is changed to the equivalent
"ca->ack_cnt / delta" operation.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200122233658.903774-1-kafai@fb.com
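The usec-resolution setup sketched (names follow the description above):

  extern unsigned long CONFIG_HZ __kconfig;  /* from the .kconfig map */

  #define HZ CONFIG_HZ
  #define USEC_PER_SEC 1000000UL
  #define USEC_PER_JIFFY (USEC_PER_SEC / HZ)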
| |
Add program extension tests that build on top of fexit_bpf2bpf tests.
Replace three global functions in the previously loaded test_pkt_access.c program
with three new implementations:
int get_skb_len(struct __sk_buff *skb);
int get_constant(long val);
int get_skb_ifindex(int val, struct __sk_buff *skb, int var);
The new functions return the same results as the originals only if the
arguments match. new_get_skb_ifindex() demonstrates that the 'skb' argument
doesn't have to be the first and only argument of a BPF program. All normal
skb-based accesses are available.
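A replacement is declared with an freplace section naming the target global
function; a sketch of the ifindex case (function body assumed):

  SEC("freplace/get_skb_ifindex")
  int new_get_skb_ifindex(int val, struct __sk_buff *skb, int var)
  {
      /* 'skb' is the second argument; normal skb accesses still work */
      return skb->ifindex * val * var;
  }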
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20200121005348.2769920-4-ast@kernel.org
| |
Fix all selftests to include libbpf header files with the bpf/ prefix, to
be consistent with external users of the library. Also ensure that all
includes of exported libbpf header files (those that are exported on 'make
install' of the library) use bracketed includes instead of quoted.
To not break the build, keep the old include path until everything has been
changed to the new one; a subsequent patch will remove that.
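For example:

  /* before */
  #include "bpf_helpers.h"

  /* after: bracketed include with the bpf/ prefix, as external users do */
  #include <bpf/bpf_helpers.h>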
Fixes: 6910d7d3867a ("selftests/bpf: Ensure bpf_helper_defs.h are taken from selftests dir")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157952560568.1683545.9649335788846513446.stgit@toke.dk
| |
Add a test that attaches a FENTRY and a FEXIT program to the XDP test
program. It verifies data from the XDP context on FENTRY and checks
the return code on FEXIT.
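A sketch of the tracing side (the target function name is an assumption):

  __u64 fentry_pkt_len = 0;
  int fexit_ret = 0;

  SEC("fentry/_xdp_tx_iptunnel")
  int BPF_PROG(trace_on_entry, struct xdp_buff *xdp)
  {
      /* data from the XDP context, verified by userspace */
      fentry_pkt_len = (__u8 *)xdp->data_end - (__u8 *)xdp->data;
      return 0;
  }

  SEC("fexit/_xdp_tx_iptunnel")
  int BPF_PROG(trace_on_exit, struct xdp_buff *xdp, int ret)
  {
      fexit_ret = ret;    /* the XDP program's return code */
      return 0;
  }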
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157909410480.47481.11202505690938004673.stgit@xdp-tutorial
| |
The test_progs send_signal() is amended to test
bpf_send_signal_thread() as well.
$ ./test_progs -n 40
#40/1 send_signal_tracepoint:OK
#40/2 send_signal_perf:OK
#40/3 send_signal_nmi:OK
#40/4 send_signal_tracepoint_thread:OK
#40/5 send_signal_perf_thread:OK
#40/6 send_signal_nmi_thread:OK
#40 send_signal:OK
Summary: 1/6 PASSED, 0 SKIPPED, 0 FAILED
Also take this opportunity to rewrite the send_signal test
using the skeleton framework and array mmap, to make the code
simpler and more readable.
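The BPF side of the _thread variants looks roughly like this (sketch; the
tracepoint and the pid/sig globals are assumptions):

  int target_pid = 0;    /* set from userspace */
  int sig = 0;

  SEC("tracepoint/syscalls/sys_enter_nanosleep")
  int send_signal_tp(void *ctx)
  {
      if ((bpf_get_current_pid_tgid() >> 32) == target_pid)
          /* unlike bpf_send_signal(), targets the current thread */
          bpf_send_signal_thread(sig);
      return 0;
  }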
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200115035003.602425-1-yhs@fb.com
| |
Streamline the BPF_TRACE_x macro by moving the return type and section
attribute definition out of the macro itself. That makes those functions look
similar in source code to other BPF programs. Additionally, simplify its usage
by determining the number of arguments automatically (so there is just a single
BPF_TRACE instead of a family of BPF_TRACE_1, BPF_TRACE_2, etc). Also, allow a
more natural function argument syntax, without commas in between argument type
and name.
Given this helper is useful not only for tracing tp_btf/fentry/fexit programs,
but could also be used for LSM programs and others following the same pattern,
rename the BPF_TRACE macro into the more generic BPF_PROG. Existing BPF_TRACE_x
usages in selftests are converted to the new BPF_PROG macro.
Following the same pattern, define BPF_KPROBE and BPF_KRETPROBE macros for
nicer usage of kprobe/kretprobe arguments, respectively. BPF_KRETPROBE adopts
the same convention used by fexit programs: the last defined argument is the
probed function's return value.
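Usage then looks roughly like this (a sketch; the traced function follows
the test_overhead selftest):

  SEC("fentry/__set_task_comm")
  int BPF_PROG(prog1, struct task_struct *tsk, const char *buf, bool exec)
  {
      return 0;
  }

  SEC("kprobe/__set_task_comm")
  int BPF_KPROBE(prog2, struct task_struct *tsk, const char *buf, bool exec)
  {
      return 0;
  }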
v4->v5:
- fix test_overhead test (__set_task_comm is void) (Alexei);
v3->v4:
- rebased and fixed one more BPF_TRACE_x occurrence (Alexei);
v2->v3:
- rename to shorter and as generic BPF_PROG (Alexei);
v1->v2:
- verified GCC handles pragmas as expected;
- added descriptions to macros;
- converted new STRUCT_OPS selftest to BPF_HANDLER (worked as expected);
- added original context as 'ctx' parameter, for cases where it has to be
passed into BPF helpers. This might cause an accidental naming collision,
unfortunately, but at least it's easy to work around. Fortunately, this
situation produces a quite legible compilation error:
progs/bpf_dctcp.c:46:6: error: redefinition of 'ctx' with a different type: 'int' vs 'unsigned long long *'
int ctx = 123;
^
progs/bpf_dctcp.c:42:6: note: previous definition is here
void BPF_HANDLER(dctcp_init, struct sock *sk)
^
./bpf_trace_helpers.h:58:32: note: expanded from macro 'BPF_HANDLER'
____##name(unsigned long long *ctx, ##args)
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20200110211634.1614739-1-andriin@fb.com
| |
test_global_func[12] - check the 512-byte stack limit.
test_global_func[34] - check the 8-frame call chain limit.
test_global_func5 - check that a non-ctx pointer cannot be passed into
a function that expects context.
test_global_func6 - check that the ctx pointer is unmodified.
test_global_func7 - check that a global function returns a scalar.
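A minimal sketch of the shape being tested: a global (non-static) function
is verified independently of its caller:

  __attribute__((noinline))
  int global_subprog(struct __sk_buff *skb)
  {
      return skb->len;    /* must return a scalar; ctx must stay unmodified */
  }

  SEC("classifier")
  int entry(struct __sk_buff *skb)
  {
      return global_subprog(skb);
  }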
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200110064124.1760511-7-ast@kernel.org
| |
Make two static functions in test_xdp_noinline.c global:
before: processed 2790 insns
after: processed 2598 insns
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200110064124.1760511-6-ast@kernel.org
| |
test results:
pyperf50 with the same function always_inline'd five times: processed 46378 insns
pyperf50 with global function: processed 6102 insns
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200110064124.1760511-5-ast@kernel.org
| |
Add a simple fexit program to the skb prog type test, where the subprogram
is a global function.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200110064124.1760511-4-ast@kernel.org
| |
This patch adds a bpf_dctcp example. It currently does not implement the
no-ECN fallback, but the same could be done through cgrp2-bpf.
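The struct_ops wiring looks roughly like this (an abbreviated sketch; the
real example implements more callbacks):

  SEC("struct_ops/dctcp_init")
  void BPF_PROG(dctcp_init, struct sock *sk)
  {
      /* initialize per-connection DCTCP state here */
  }

  SEC(".struct_ops")
  struct tcp_congestion_ops dctcp = {
      .init = (void *)dctcp_init,
      .name = "bpf_dctcp",
  };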
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200109003517.3856825-1-kafai@fb.com
| |
Move Kconfig-provided externs into custom .kconfig section. Add __kconfig into
bpf_helpers.h for user convenience. Update selftests accordingly.
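In bpf_helpers.h this is just a section attribute; usage sketched below:

  /* in bpf_helpers.h */
  #define __kconfig __attribute__((section(".kconfig")))

  /* in a BPF program; resolved by libbpf at load time */
  extern unsigned long CONFIG_HZ __kconfig;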
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191219002837.3074619-2-andriin@fb.com
| |
Allow passing the skb's mark field into the bpf_prog_test_run ctx
for the BPF_PROG_TYPE_SCHED_CLS prog type. This makes it possible
to test BPF programs that make decisions based on this field.
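From userspace this looks roughly like the following (a sketch; prog_fd is
assumed to be a loaded BPF_PROG_TYPE_SCHED_CLS program):

  char pkt[64] = {};    /* dummy packet data */
  struct __sk_buff ctx = { .mark = 42 };
  struct bpf_prog_test_run_attr tattr = {
      .prog_fd = prog_fd,
      .data_in = pkt,
      .data_size_in = sizeof(pkt),
      .ctx_in = &ctx,
      .ctx_size_in = sizeof(ctx),
  };
  int err = bpf_prog_test_run_xattr(&tattr);
  /* the program can now branch on skb->mark == 42 */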
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| |
Add a few tests validating CO-RE relocation handling of flexible array accesses.
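For example (sketch; the struct layout is made up):

  struct core_reloc_flex {
      int len;
      int data[];    /* flexible array member */
  };

  static int read_flex(struct core_reloc_flex *obj)
  {
      /* the offset of data[2] is CO-RE-relocated like any other field */
      return BPF_CORE_READ(obj, data[2]);
  }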
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191215070844.1014385-3-andriin@fb.com
| |
Add a set of tests validating libbpf-provided extern variables. One crucial
feature that's tested is dead code elimination together with use of an invalid
BPF helper. CONFIG_MISSING is not supposed to exist and should always be set
to zero by libbpf, which allows the BPF verifier to correctly do branch pruning
and not fail validation when the invalid BPF helper is called from the dead branch.
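The pattern under test looks roughly like this (a sketch; the __weak
marking and the helper-declaration trick are assumptions here):

  extern int CONFIG_MISSING __kconfig __weak;    /* libbpf supplies 0 */

  /* helper id 999 does not exist */
  static __u64 (*bpf_missing_helper)(const void *arg1, int arg2) = (void *)999;

  SEC("raw_tp/sys_enter")
  int handler(const void *ctx)
  {
      if (CONFIG_MISSING)
          /* dead branch: pruned by the verifier, never validated */
          return bpf_missing_helper(ctx, 1);
      return 0;
  }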
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191214014710.3449601-5-andriin@fb.com
| |
Add a simple selftest validating datasection-to-struct layout dumping. Global
variables are constructed in such a way as to cause both natural and
artificial padding (through custom alignment requirements).
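For example (sketch):

  char a;    /* 3 bytes of natural padding follow, to align 'b' */
  int b;
  char c __attribute__((aligned(16)));    /* forces artificial padding */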
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-17-andriin@fb.com
| |
Add BPF skeleton generation to selftest/bpf's Makefile. Convert attach_probe.c
to use the skeleton.
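With the skeleton, a test reduces to roughly the following (sketch; the
generated names are derived from the object file name):

  #include "test_attach_probe.skel.h"

  void test_attach_probe(void)
  {
      struct test_attach_probe *skel;

      skel = test_attach_probe__open_and_load();
      if (!skel)
          return;
      if (test_attach_probe__attach(skel))
          goto cleanup;
      /* trigger probes, then check results via skel->bss */
  cleanup:
      test_attach_probe__destroy(skel);
  }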
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-15-andriin@fb.com
| |
Make sure we can pass arbitrary data in wire_len/gso_segs.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191213223028.161282-2-sdf@google.com
| |
Now that libbpf can recognize SK_REUSEPORT programs, we no longer have to
pass a prog_type hint before loading the object file.
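With that, the section name alone is enough (sketch):

  SEC("sk_reuseport")
  int select_sock(struct sk_reuseport_md *reuse_md)
  {
      return SK_PASS;
  }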
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-3-jakub@cloudflare.com
| |
The existing fexit_bpf2bpf test covers a target program with callees.
This patch adds a test for a target program without callees.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191205010607.177904-1-yhs@fb.com
| |
It looks like the BPF program that handles the BPF_SOCK_OPS_STATE_CB state
can race with the bpf_map_lookup_elem() of "global_map"; I sometimes
see failures in this test, and re-running helps.
Since we know that we expect the callback to be called 3 times (once
for the listener socket, twice for the two ends of the connection),
let's export this number and add simple retry logic around it.
Also, let's make EXPECT_EQ() not return on failure, but continue
evaluating all conditions; that should make potential debugging
easier.
With this fix in place I don't observe the flakiness anymore.
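The retry logic amounts to something like this (sketch; names are made up):

  for (i = 0; i < RETRY_COUNT; i++) {
      bpf_map_lookup_elem(map_fd, &key, &g);
      if (g.num_state_cb_events >= EXPECTED_STATE_CB_EVENTS)
          break;
      sleep(1);    /* give the BPF program time to see the callback */
  }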
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Lawrence Brakmo <brakmo@fb.com>
Link: https://lore.kernel.org/bpf/20191204190955.170934-1-sdf@google.com
| |
Similarly to a0d7da26ce86 ("libbpf: Fix call relocation offset calculation
bug"), relocations against global variables need to take into account
the referenced symbol's st_value, which holds the offset into the corresponding
data section (and, subsequently, the offset into the internal backing map). For
static variables this offset is always zero, and the data offset is completely
described by the respective instruction's imm field.
Convert a bunch of selftests to global variables. Previously they were relying
on the `static volatile` trick to ensure Clang doesn't inline static variables;
with global variables that is not necessary anymore.
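I.e. (sketch; the variable name is made up):
before:
  static volatile int test_pid = 0;
after:
  int test_pid = 0;    /* relocated via the symbol's st_value */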
Fixes: 393cdfbee809 ("libbpf: Support initialized global variables")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191127200651.1381348-1-andriin@fb.com
| |
For BPF_PROG_TYPE_TRACING, the bpf_prog's ctx is an array of u64.
This patch borrows the idea from BPF_CALL_x in filter.h to
convert a u64 to the arg type of the traced function.
The new BPF_TRACE_x has an arg to specify the return type of a bpf_prog.
It will be used in the future TCP-ops bpf_prog that may return "void".
The new macros are defined in the new header file "bpf_trace_helpers.h".
It is under selftests/bpf/ for now. It could be moved to libbpf later
after seeing more upcoming non-tracing use cases.
The tests are changed to use these new macros as well. Hence, the
k[s]u8/16/32/64 types are no longer needed and are removed from
bpf_helpers.h.
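A simplified sketch of the mechanics (not the exact macro from
bpf_trace_helpers.h):

  #define BPF_TRACE_2(fname, ret_type, t1, a1, t2, a2)            \
  static __always_inline ret_type ____##fname(t1 a1, t2 a2);      \
  ret_type fname(unsigned long long *ctx)                         \
  {                                                               \
      /* ctx is an array of u64; cast each slot to the arg type */ \
      return ____##fname((t1)ctx[0], (t2)ctx[1]);                 \
  }                                                               \
  static __always_inline ret_type ____##fname(t1 a1, t2 a2)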
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191123202504.1502696-1-kafai@fb.com
| |
Add several BPF kselftest cases for tail calls which test the various
patch directions, and that multiple locations are patched in same and
different programs.
# ./test_progs -n 45
#45/1 tailcall_1:OK
#45/2 tailcall_2:OK
#45/3 tailcall_3:OK
#45/4 tailcall_4:OK
#45/5 tailcall_5:OK
#45 tailcalls:OK
Summary: 1/5 PASSED, 0 SKIPPED, 0 FAILED
I've also verified that the JITed dump after each of the rewrite cases
matches expectations.
Also regular test_verifier suite passes fine which contains further tail
call tests:
# ./test_verifier
[...]
Summary: 1563 PASSED, 0 SKIPPED, 0 FAILED
Checked under JIT, interpreter and JIT + hardening.
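Each test follows roughly this shape (sketch, close to the selftests):

  struct {
      __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
      __uint(max_entries, 2);
      __uint(key_size, sizeof(__u32));
      __uint(value_size, sizeof(__u32));
  } jmp_table SEC(".maps");

  SEC("classifier/0")
  int classifier_0(struct __sk_buff *skb)
  {
      return 0;
  }

  SEC("classifier")
  int entry(struct __sk_buff *skb)
  {
      bpf_tail_call(skb, &jmp_table, 0);
      /* falls through when the slot is empty or the call fails */
      return 3;
  }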
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/3d6cbecbeb171117dccfe153306e479798fb608d.1574452833.git.daniel@iogearbox.net
| |
Add a test that benchmarks different ways of attaching a BPF program to a
kernel function. Here are the results for a 2.4 GHz x86 CPU on a kernel
without mitigations:
$ ./test_progs -n 49 -v|grep events
task_rename base 2743K events per sec
task_rename kprobe 2419K events per sec
task_rename kretprobe 1876K events per sec
task_rename raw_tp 2578K events per sec
task_rename fentry 2710K events per sec
task_rename fexit 2685K events per sec
On a kernel with retpoline:
$ ./test_progs -n 49 -v|grep events
task_rename base 2401K events per sec
task_rename kprobe 1930K events per sec
task_rename kretprobe 1485K events per sec
task_rename raw_tp 2053K events per sec
task_rename fentry 2351K events per sec
task_rename fexit 2185K events per sec
All 5 approaches:
- kprobe/kretprobe in __set_task_comm()
- raw tracepoint in trace_task_rename()
- fentry/fexit in __set_task_comm()
are roughly equivalent.
__set_task_comm() by itself is quite fast, so any extra instructions add up.
Until BPF trampoline was introduced the fastest mechanism was raw tracepoint.
kprobe via ftrace was second best. kretprobe is slow due to trap. New
fentry/fexit methods via BPF trampoline are clearly the fastest and the
difference is more pronounced with retpoline on, since BPF trampoline doesn't
use indirect jumps.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191122011515.255371-1-ast@kernel.org
| |
The test_core_reloc_kernel.c selftest is the only CO-RE test that reads and
returns for validation the calling thread's information (pid, tgid, comm).
Thus it has to make sure that only test_progs' invocations are honored.
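I.e. the program bails out unless invoked by the test process (sketch; the
global and section are assumptions):

  int my_pid = 0;    /* set from userspace before triggering the prog */

  SEC("raw_tracepoint/sys_enter")
  int handler(void *ctx)
  {
      if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
          return 0;    /* ignore other processes' invocations */
      /* capture pid/tgid/comm for validation here */
      return 0;
  }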
Fixes: df36e621418b ("selftests/bpf: add CO-RE relocs testing setup")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191121175900.3486133-1-andriin@fb.com
| |
Initialized global variables are no different in ELF from static variables,
and don't require any extra support from libbpf. But they match the semantics
of global data (backed by BPF maps) more closely, preventing LLVM/Clang from
aggressively inlining constant values and not requiring 'volatile'
incantations to prevent that. This patch enables global variables. It still
disallows uninitialized variables, which would be put into the special COM
(common) ELF section, because BPF doesn't allow uninitialized data to be
accessed.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191121070743.1309473-5-andriin@fb.com
| |
When relocating a subprogram call, libbpf doesn't take into account
relo->text_off, which comes from the symbol's value. This generally works
fine for subprograms implemented as static functions, but breaks for global
functions.
Taking a simplified test_pkt_access.c as an example:
__attribute__ ((noinline))
static int test_pkt_access_subprog1(volatile struct __sk_buff *skb)
{
	return skb->len * 2;
}

__attribute__ ((noinline))
static int test_pkt_access_subprog2(int val, volatile struct __sk_buff *skb)
{
	return skb->len + val;
}

SEC("classifier/test_pkt_access")
int test_pkt_access(struct __sk_buff *skb)
{
	if (test_pkt_access_subprog1(skb) != skb->len * 2)
		return TC_ACT_SHOT;
	if (test_pkt_access_subprog2(2, skb) != skb->len + 2)
		return TC_ACT_SHOT;
	return TC_ACT_UNSPEC;
}
When compiled, we get two relocations, pointing to '.text' symbol. .text has
st_value set to 0 (it points to the beginning of .text section):
0000000000000008 000000050000000a R_BPF_64_32 0000000000000000 .text
0000000000000040 000000050000000a R_BPF_64_32 0000000000000000 .text
test_pkt_access_subprog1 and test_pkt_access_subprog2 offsets (targets of two
calls) are encoded within call instruction's imm32 part as -1 and 2,
respectively:
0000000000000000 test_pkt_access_subprog1:
0: 61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0)
1: 64 00 00 00 01 00 00 00 w0 <<= 1
2: 95 00 00 00 00 00 00 00 exit
0000000000000018 test_pkt_access_subprog2:
3: 61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0)
4: 04 00 00 00 02 00 00 00 w0 += 2
5: 95 00 00 00 00 00 00 00 exit
0000000000000000 test_pkt_access:
0: bf 16 00 00 00 00 00 00 r6 = r1
===> 1: 85 10 00 00 ff ff ff ff call -1
2: bc 01 00 00 00 00 00 00 w1 = w0
3: b4 00 00 00 02 00 00 00 w0 = 2
4: 61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0)
5: 64 02 00 00 01 00 00 00 w2 <<= 1
6: 5e 21 08 00 00 00 00 00 if w1 != w2 goto +8 <LBB0_3>
7: bf 61 00 00 00 00 00 00 r1 = r6
===> 8: 85 10 00 00 02 00 00 00 call 2
9: bc 01 00 00 00 00 00 00 w1 = w0
10: 61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0)
11: 04 02 00 00 02 00 00 00 w2 += 2
12: b4 00 00 00 ff ff ff ff w0 = -1
13: 1e 21 01 00 00 00 00 00 if w1 == w2 goto +1 <LBB0_3>
14: b4 00 00 00 02 00 00 00 w0 = 2
0000000000000078 LBB0_3:
15: 95 00 00 00 00 00 00 00 exit
Now, if we compile the example with global functions, the setup changes.
Relocations are now against specifically test_pkt_access_subprog1 and
test_pkt_access_subprog2 symbols, with test_pkt_access_subprog2 pointing 24
bytes into its respective section (.text), i.e., 3 instructions in:
0000000000000008 000000070000000a R_BPF_64_32 0000000000000000 test_pkt_access_subprog1
0000000000000048 000000080000000a R_BPF_64_32 0000000000000018 test_pkt_access_subprog2
Call instructions now encode offsets relative to function symbols and are both
set to -1:
0000000000000000 test_pkt_access_subprog1:
0: 61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0)
1: 64 00 00 00 01 00 00 00 w0 <<= 1
2: 95 00 00 00 00 00 00 00 exit
0000000000000018 test_pkt_access_subprog2:
3: 61 20 00 00 00 00 00 00 r0 = *(u32 *)(r2 + 0)
4: 0c 10 00 00 00 00 00 00 w0 += w1
5: 95 00 00 00 00 00 00 00 exit
0000000000000000 test_pkt_access:
0: bf 16 00 00 00 00 00 00 r6 = r1
===> 1: 85 10 00 00 ff ff ff ff call -1
2: bc 01 00 00 00 00 00 00 w1 = w0
3: b4 00 00 00 02 00 00 00 w0 = 2
4: 61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0)
5: 64 02 00 00 01 00 00 00 w2 <<= 1
6: 5e 21 09 00 00 00 00 00 if w1 != w2 goto +9 <LBB2_3>
7: b4 01 00 00 02 00 00 00 w1 = 2
8: bf 62 00 00 00 00 00 00 r2 = r6
===> 9: 85 10 00 00 ff ff ff ff call -1
10: bc 01 00 00 00 00 00 00 w1 = w0
11: 61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0)
12: 04 02 00 00 02 00 00 00 w2 += 2
13: b4 00 00 00 ff ff ff ff w0 = -1
14: 1e 21 01 00 00 00 00 00 if w1 == w2 goto +1 <LBB2_3>
15: b4 00 00 00 02 00 00 00 w0 = 2
0000000000000080 LBB2_3:
16: 95 00 00 00 00 00 00 00 exit
Thus the right formula for the target call offset after relocation should
take into account the relocation's target symbol value (offset within its
section), the call instruction's imm32 offset, and (subtracting, to get a
relative instruction offset) the instruction index of the call instruction
itself. All that is shifted by the number of instructions in the main program,
given all sub-programs are copied over after the main program.
Convert a few selftests relying on bpf-to-bpf calls to use global functions
instead of static ones.
Fixes: 48cca7e44f9f ("libbpf: add support for bpf_call")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191119224447.3781271-1-andriin@fb.com
| |
Currently, with latest llvm trunk, the test_progs selftest fails on the obj
file test_seg6_loop.o with the following verifier error:
infinite loop detected at insn 76
The bytecode sequence looks like below; note that alu32 has been
turned off by default for better generated code in general:
48: w3 = 100
49: *(u32 *)(r10 - 68) = r3
...
; if (tlv.type == SR6_TLV_PADDING) {
76: if w3 == 5 goto -18 <LBB0_19>
...
85: r1 = *(u32 *)(r10 - 68)
; for (int i = 0; i < 100; i++) {
86: w1 += -1
87: if w1 == 0 goto +5 <LBB0_20>
88: *(u32 *)(r10 - 68) = r1
The main reason for the verification failure is the partial spill at
r10 - 68 for the induction variable "i".
The current verifier only handles spills of 8-byte values. The above 4-byte
value spilled to the stack is treated as STACK_MISC and its content is not
saved. For the above example:
w3 = 100
R3_w=inv100 fp-64_w=inv1086626730498
*(u32 *)(r10 - 68) = r3
R3_w=inv100 fp-64_w=inv1086626730498
...
r1 = *(u32 *)(r10 - 68)
R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
fp-64=inv1086626730498
To resolve this issue, the verifier needs to be extended to track sub-registers
in spilling, or llvm needs to be enhanced to prevent sub-register spilling
in the register allocation phase. The former will increase verifier complexity
and the latter will need some llvm "hacking".
Let us work around this issue by declaring the induction variable as "long"
so spilling happens at the non-sub-register level. We can revisit this
later if sub-register spilling causes similar or other verification issues.
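I.e. (sketch):

  /* before: 4-byte spill of 'i' loses precision (STACK_MISC) */
  for (int i = 0; i < 100; i++) { /* ... */ }

  /* after: full 8-byte spill keeps the verifier's tracked bounds */
  for (long i = 0; i < 100; i++) { /* ... */ }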
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191117214036.1309510-1-yhs@fb.com
| |
Add selftests validating mmap()-ing BPF array maps: both single-element and
multi-element ones. Check that plain bpf_map_update_elem() and
bpf_map_lookup_elem() work correctly with a memory-mapped array. Also convert
CO-RE relocation tests to use memory-mapped views of global data.
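The pattern under test looks roughly like this (sketch; map name and sizes
are made up):

  /* BPF side: array created with BPF_F_MMAPABLE */
  struct {
      __uint(type, BPF_MAP_TYPE_ARRAY);
      __uint(max_entries, 512);
      __uint(map_flags, BPF_F_MMAPABLE);
      __type(key, __u32);
      __type(value, __u64);
  } data_map SEC(".maps");

  /* userspace: map the values directly and read/write them in place */
  __u64 *vals = mmap(NULL, 512 * sizeof(__u64), PROT_READ | PROT_WRITE,
                     MAP_SHARED, bpf_map__fd(map), 0);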
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191117172806.2195367-6-andriin@fb.com
| |
Add a test that attaches one FEXIT program to the main sched_cls networking
program and two other FEXIT programs to its subprograms. All three tracing
programs access the return values and skb->len of the networking program and
subprograms.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-21-ast@kernel.org
| |
test_pkt_access.o is used by multiple tests. Fix its section name so that the
program type can be automatically detected by libbpf, and make it call other
subprograms with an skb argument.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-20-ast@kernel.org
| |
Add fexit tests for the BPF trampoline that check kernel functions
with up to 6 arguments of different sizes and their return values.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-11-ast@kernel.org
| |
Add a sanity test for the BPF trampoline that checks kernel functions
with up to 6 arguments of different sizes.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-10-ast@kernel.org
| |
Add a simple test for fentry and fexit programs around eth_type_trans.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-8-ast@kernel.org
| |
With the latest llvm compiler, running test_progs hits the following
verifier failure for test_sysctl_loop1.o:
libbpf: load bpf program failed: Permission denied
libbpf: -- BEGIN DUMP LOG ---
libbpf:
invalid indirect read from stack var_off (0x0; 0xff)+196 size 7
...
libbpf: -- END LOG --
libbpf: failed to load program 'cgroup/sysctl'
libbpf: failed to load object 'test_sysctl_loop1.o'
The related bytecode looks as below:
0000000000000308 LBB0_8:
97: r4 = r10
98: r4 += -288
99: r4 += r7
100: w8 &= 255
101: r1 = r10
102: r1 += -488
103: r1 += r8
104: r2 = 7
105: r3 = 0
106: call 106
107: w1 = w0
108: w1 += -1
109: if w1 > 6 goto -24 <LBB0_5>
110: w0 += w8
111: r7 += 8
112: w8 = w0
113: if r7 != 224 goto -17 <LBB0_8>
And source code:
	for (i = 0; i < ARRAY_SIZE(tcp_mem); ++i) {
		ret = bpf_strtoul(value + off, MAX_ULONG_STR_LEN, 0,
				  tcp_mem + i);
		if (ret <= 0 || ret > MAX_ULONG_STR_LEN)
			return 0;
		off += ret & MAX_ULONG_STR_LEN;
	}
The current verifier is not able to conclude that register w0 before the '+'
at insn 110 has a range of 1 to 7, and thinks it is 0 - 255. This
leads to a more conservative range for w8 at insn 112, and a later verifier
complaint.
Let us work around this issue until we find a compiler and/or verifier
solution. The workaround in this patch is to make the variable 'ret' volatile,
which forces a reload and then an '&' operation to ensure a better value
range. With this patch, I got the below bytecode for the loop:
0000000000000328 LBB0_9:
101: r4 = r10
102: r4 += -288
103: r4 += r7
104: w8 &= 255
105: r1 = r10
106: r1 += -488
107: r1 += r8
108: r2 = 7
109: r3 = 0
110: call 106
111: *(u32 *)(r10 - 64) = r0
112: r1 = *(u32 *)(r10 - 64)
113: if w1 s< 1 goto -28 <LBB0_5>
114: r1 = *(u32 *)(r10 - 64)
115: if w1 s> 7 goto -30 <LBB0_5>
116: r1 = *(u32 *)(r10 - 64)
117: w1 &= 7
118: w1 += w8
119: r7 += 8
120: w8 = w1
121: if r7 != 224 goto -21 <LBB0_9>
Insn 117 did the '&' operation and we got more precise value range
for 'w8' at insn 120. The test is happy then:
#3/17 test_sysctl_loop1.o:OK
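The workaround itself is tiny (sketch; the exact type of 'ret' is assumed):

  volatile int ret;    /* 'volatile' forces the reload */

  for (i = 0; i < ARRAY_SIZE(tcp_mem); ++i) {
      ret = bpf_strtoul(value + off, MAX_ULONG_STR_LEN, 0, tcp_mem + i);
      if (ret <= 0 || ret > MAX_ULONG_STR_LEN)
          return 0;
      /* reload of 'ret' plus '&' gives the verifier a precise range */
      off += ret & MAX_ULONG_STR_LEN;
  }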
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191107170045.2503480-1-yhs@fb.com
| |
This adds tests for the different variations of automatic map unpinning on
load failure.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333184838.88376.8243704248624814775.stgit@toke.dk
| |
Access the skb->cb[] in the kfree_skb test.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191107180905.4097871-1-kafai@fb.com
| |
Streamline the BPF_CORE_READ_BITFIELD_PROBED interface to follow
BPF_CORE_READ_BITFIELD (direct) and BPF_CORE_READ in general, i.e., just
return the read result, or 0 if the underlying bpf_probe_read() failed.
In practice, real applications rarely check the bpf_probe_read() result,
because it has to always work or otherwise it's a bug. So propagating the
internal bpf_probe_read() error from this macro hurts usability without
providing real benefits in practice. This patch fixes the issue and simplifies
usage, noticeable even in the selftest itself.
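After the change, usage is a plain expression (sketch; struct and field are
made up):

  struct st { int bf: 3; };

  static __u64 read_bf(struct st *obj)
  {
      /* returns the bitfield value, or 0 if bpf_probe_read() failed */
      return BPF_CORE_READ_BITFIELD_PROBED(obj, bf);
  }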
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191106201500.2582438-1-andriin@fb.com
| |
Add a test verifying the correctness and logic of field size relocation
support in libbpf.
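The relocation is exercised via the bpf_core_field_size() macro, roughly
(sketch, assuming vmlinux.h/CO-RE types):

  static __u32 comm_size(struct task_struct *task)
  {
      /* byte size of task->comm in the *target* kernel's BTF */
      return bpf_core_field_size(task->comm);
  }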
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-6-andriin@fb.com
| |
Add a bunch of selftests verifying correctness of relocatable bitfield reading
support in libbpf. Both bpf_probe_read()-based and direct read-based bitfield
macros are tested. core_reloc.c's "test_harness" is extended to support raw
tracepoints and the new typed raw tracepoints as test BPF program types.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-5-andriin@fb.com
| |
As libbpf is going to gain support for more field relocations, including field
size, some restrictions about exact size match are going to be lifted. Remove
test cases that explicitly test such failures.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-2-andriin@fb.com
| |
Tested on x86-64, and Ilya was also kind enough to give it a spin on
s390x; both pass with probe_user:OK. The test uses the
newly added bpf_probe_read_user() to dump the sockaddr from a connect call
into a .bss BPF map and overwrites the user buffer via bpf_probe_write_user():
# ./test_progs
[...]
#17 pkt_md_access:OK
#18 probe_user:OK
#19 prog_run_xattr:OK
[...]
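The BPF side is roughly the following (a sketch close to the selftest; the
probed symbol is an assumption):

  static struct sockaddr_in old;

  SEC("kprobe/__sys_connect")
  int handle_sys_connect(struct pt_regs *ctx)
  {
      void *ptr = (void *)PT_REGS_PARM2(ctx);
      struct sockaddr_in new = {};

      bpf_probe_read_user(&old, sizeof(old), ptr);
      bpf_probe_write_user(ptr, &new, sizeof(new));
      return 0;
  }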
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/90f449d8af25354e05080e82fc6e2d3179da30ea.1572649915.git.daniel@iogearbox.net
| |
Use probe read *_{kernel,user}{,_str}() helpers instead of bpf_probe_read()
or bpf_probe_read_user_str() for program tests where appropriate.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/4a61d4b71ce3765587d8ef5cb93afa18515e5b3e.1572649915.git.daniel@iogearbox.net
| |
This adds a new BPF selftest to exercise the new automatic map pinning
code.
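Declarative pinning in a map definition looks roughly like this (sketch):

  struct {
      __uint(type, BPF_MAP_TYPE_ARRAY);
      __uint(max_entries, 1);
      __type(key, __u32);
      __type(value, __u64);
      __uint(pinning, LIBBPF_PIN_BY_NAME);    /* under /sys/fs/bpf by default */
  } pinmap SEC(".maps");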
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157269298209.394725.15420085139296213182.stgit@toke.dk
| |
reference_tracking is actually a set of 9 sub-tests. Make that explicit.
Also, add an explicit "classifier/" prefix to BPF program section names to
let libbpf correctly guess the program type, and remove the now-redundant
bpf_prog__set_type() call.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191021033902.3856966-7-andriin@fb.com
| |
test_queue_stack_map.h is used only from BPF programs, so it should live in
the progs/ subdir. An added benefit of moving it there is that the new
TEST_RUNNER_DEFINE_RULES macro-rule will properly capture the dependency on
this header for all BPF objects and trigger a rebuild if it changes.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191016060051.2024182-7-andriin@fb.com
| |
The test_core_reloc_kernel test captures its own process name and validates
it as part of the test. Given the extra "flavors" of test_progs, this breaks
for anything but the default test_progs binary. Fix the test to cut out the
flavor part of the process name.
Fixes: ee2eb063d330 ("selftests/bpf: Add BPF_CORE_READ and BPF_CORE_READ_STR_INTO macro tests")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191016060051.2024182-3-andriin@fb.com