path: root/tools
Commit message (Author, Date, Files changed, Lines -/+)
* selftests/bpf: Set the default value of consumer_cnt as 0 (Hou Tao, 2023-06-19, 14 files, -128/+35)
| | | | | | | | | | | Considering that only bench_ringbufs.c supports consumer, just set the default value of consumer_cnt as 0. After that, update the validity check of consumer_cnt, remove unused consumer_thread code snippets and set consumer_cnt as 1 in run_bench_ringbufs.sh accordingly. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20230613080921.1623219-5-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* selftests/bpf: Ensure that next_cpu() returns a valid CPU number (Hou Tao, 2023-06-19, 2 files, -1/+3)
| | | | | | | | | | | | | | | | | | | When using option -a without --prod-affinity or --cons-affinity, if the number of producers and consumers is greater than the number of online CPUs, the benchmark will fail to run as shown below: $ getconf _NPROCESSORS_ONLN 8 $ ./bench bpf-loop -a -p9 Setting up benchmark 'bpf-loop'... setting affinity to CPU #8 failed: -22 Fix it by returning the remainder of next_cpu divided by the number of online CPUs in next_cpu(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20230613080921.1623219-4-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
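A minimal sketch of the modulo-based fix described above; the function and variable names are illustrative, not the actual bench.c code:

    #include <unistd.h>

    /* Illustrative sketch: return the remainder of a monotonically increasing
     * index divided by the number of online CPUs, so the affinity target is
     * always a valid CPU even when -p/-c exceed the online CPU count. */
    static int next_cpu(void)
    {
        static int idx;
        long nr_online = sysconf(_SC_NPROCESSORS_ONLN);

        return (int)(idx++ % nr_online);
    }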
* selftests/bpf: Output the correct error code for pthread APIs (Hou Tao, 2023-06-19, 1 file, -4/+6)
| | | | | | | | | The return value of pthread API is the error code when the called API fails, so output the return value instead of errno. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20230613080921.1623219-3-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
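A small illustrative example of the reporting pattern the fix describes (the helper name is made up); pthread functions return the error code directly instead of setting errno:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    /* Report the value returned by the pthread API, not errno. */
    static int start_worker(pthread_t *tid, void *(*fn)(void *), void *arg)
    {
        int err = pthread_create(tid, NULL, fn, arg);

        if (err)
            fprintf(stderr, "failed to create thread: %s\n", strerror(err));
        return err;
    }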
* selftests/bpf: Use producer_cnt to allocate local counter array (Hou Tao, 2023-06-19, 1 file, -1/+1)
| | | | | | | | | For count-local benchmark, use producer_cnt instead of consumer_cnt when allocating local counter array. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20230613080921.1623219-2-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: Centralize permissions checks for all BPF map types (Andrii Nakryiko, 2023-06-19, 1 file, -1/+5)
| | | | | | | | | | | | | This allows to do more centralized decisions later on, and generally makes it very explicit which maps are privileged and which are not (e.g., LRU_HASH and LRU_PERCPU_HASH, which are privileged HASH variants, as opposed to unprivileged HASH and HASH_PERCPU; now this is explicit and easy to verify). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20230613223533.3689589-4-andrii@kernel.org
* selftests/bpf: Verify that check_ids() is used for scalars in regsafe() (Eduard Zingerman, 2023-06-13, 1 file, -0/+315)
| | | | | | | | | | | | | | | | | | | | | | | | | Verify that the following example is rejected by verifier: r9 = ... some pointer with range X ... r6 = ... unbound scalar ID=a ... r7 = ... unbound scalar ID=b ... if (r6 > r7) goto +1 r7 = r6 if (r7 > X) goto exit r9 += r6 *(u64 *)r9 = Y Also add test cases to: - check that check_alu_op() for BPF_MOV instruction does not allocate scalar ID if source register is a constant; - check that unique scalar IDs are ignored when new verifier state is compared to cached verifier state; - check that two different scalar IDs in a verified state can't be mapped to the same scalar ID in current state. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230613153824.3324830-5-eddyz87@gmail.com
* selftests/bpf: Check if mark_chain_precision() follows scalar ids (Eduard Zingerman, 2023-06-13, 2 files, -0/+346)
| | | | | | | | | | | | | | | | Check __mark_chain_precision() log to verify that scalars with same IDs are marked as precise. Use several scenarios to test that precision marks are propagated through: - registers of scalar type with the same ID within one state; - registers of scalar type with the same ID cross several states; - registers of scalar type with the same ID cross several stack frames; - stack slot of scalar type with the same ID; - multiple scalar IDs are tracked independently. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230613153824.3324830-3-eddyz87@gmail.com
* bpf: Use scalar ids in mark_chain_precision() (Eduard Zingerman, 2023-06-13, 1 file, -4/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Change mark_chain_precision() to track precision in situations like below: r2 = unknown value ... --- state #0 --- ... r1 = r2 // r1 and r2 now share the same ID ... --- state #1 {r1.id = A, r2.id = A} --- ... if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1 ... --- state #2 {r1.id = A, r2.id = A} --- r3 = r10 r3 += r1 // need to mark both r1 and r2 At the beginning of the processing of each state, ensure that if a register with a scalar ID is marked as precise, all registers sharing this ID are also marked as precise. This property would be used by a follow-up change in regsafe(). Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230613153824.3324830-2-eddyz87@gmail.com
* selftests/bpf: Update bpf_cpumask_any* tests to use bpf_cpumask_any_distribute* (David Vernet, 2023-06-12, 2 files, -6/+6)
| | | | | | | | | | | | | | | | In a prior patch, we removed the bpf_cpumask_any() and bpf_cpumask_any_and() kfuncs, and replaced them with bpf_cpumask_any_distribute() and bpf_cpumask_any_distribute_and(). The advertised semantics between the two kfuncs were identical, with the former always returning the first CPU, and the latter actually returning any CPU. This patch updates the selftests for these kfuncs to use the new names. Signed-off-by: David Vernet <void@manifault.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20230610035053.117605-4-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* selftests/bpf: Add test for new bpf_cpumask_first_and() kfunc (David Vernet, 2023-06-12, 3 files, -0/+35)
| | | | | | | | | | | A prior patch added a new kfunc called bpf_cpumask_first_and() which wraps cpumask_first_and(). This patch adds a selftest to validate its behavior. Signed-off-by: David Vernet <void@manifault.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20230610035053.117605-2-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* selftests/bpf: Fix invalid pointer check in get_xlated_program() (Eduard Zingerman, 2023-06-12, 1 file, -11/+13)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Dan Carpenter reported invalid check for calloc() result in test_verifier.c:get_xlated_program(): ./tools/testing/selftests/bpf/test_verifier.c:1365 get_xlated_program() warn: variable dereferenced before check 'buf' (see line 1364) ./tools/testing/selftests/bpf/test_verifier.c 1363 *cnt = xlated_prog_len / buf_element_size; 1364 *buf = calloc(*cnt, buf_element_size); 1365 if (!buf) { This should be if (!*buf) { 1366 perror("can't allocate xlated program buffer"); 1367 return -ENOMEM; This commit refactors the get_xlated_program() to avoid using double pointer type. Fixes: 933ff53191eb ("selftests/bpf: specify expected instructions in test_verifier tests") Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Closes: https://lore.kernel.org/bpf/ZH7u0hEGVB4MjGZq@moroto/ Link: https://lore.kernel.org/bpf/20230609221637.2631800-1-eddyz87@gmail.com
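One possible shape of such a refactor, sketched here with a hypothetical helper rather than the actual patch: returning the buffer directly removes the easy-to-confuse buf vs. *buf distinction:

    #include <linux/bpf.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: allocate the xlated-instruction buffer and return
     * it, so the NULL check operates on the pointer itself. */
    static struct bpf_insn *alloc_xlated_buf(__u32 cnt)
    {
        struct bpf_insn *buf = calloc(cnt, sizeof(struct bpf_insn));

        if (!buf)
            perror("can't allocate xlated program buffer");
        return buf;
    }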
* selftests/bpf: Add missing prototypes for several test kfuncs (Jiri Olsa, 2023-06-08, 2 files, -8/+15)
| | | | | | | | | | | | | | | | | | | Adding missing prototypes for several kfuncs that are used by test_verifier tests. We don't really need kfunc prototypes for these tests, but adding them to silence 'make W=1' build and to have all test kfuncs declarations in bpf_testmod_kfunc.h. Also moving __diag_pop for -Wmissing-prototypes to cover also bpf_testmod_test_write and bpf_testmod_test_read and adding bpf_fentry_shadow_test in there as well. All of them need to be exported, but there's no need for declarations. Fixes: 65eb006d85a2 ("bpf: Move kernel test kfuncs to bpf_testmod") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Closes: https://lore.kernel.org/oe-kbuild-all/202306051319.EihCQZPs-lkp@intel.com Link: https://lore.kernel.org/bpf/20230607224046.236510-1-jolsa@kernel.org
* selftests/bpf: Fix check_mtu using wrong variable type (Jesper Dangaard Brouer, 2023-06-06, 1 file, -1/+1)
| | | | | | | | | | | | | | | Dan Carpenter found via Smatch static checker, that unsigned 'mtu_lo' is never less than zero. Variable mtu_lo should have been an 'int', because read_mtu_device_lo() uses minus as error indications. Fixes: b62eba563229 ("selftests/bpf: Tests using bpf_check_mtu BPF-helper") Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/bpf/168605104733.3636467.17945947801753092590.stgit@firesoul
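A short sketch of why the signedness matters; the helper name comes from the commit message, the surrounding function is illustrative:

    /* With a signed type, the negative error indication from
     * read_mtu_device_lo() is no longer silently discarded. */
    static void check_mtu_result(int mtu_lo /* was: unsigned int */)
    {
        if (mtu_lo < 0)
            return;   /* error path; unreachable when the type was unsigned */
        /* ... proceed with a valid MTU ... */
    }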
* selftests/bpf: Add missing selftests kconfig options (David Vernet, 2023-06-05, 1 file, -0/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Our selftests of course rely on the kernel being built with CONFIG_DEBUG_INFO_BTF=y, though this (nor its dependencies of CONFIG_DEBUG_INFO=y and CONFIG_DEBUG_INFO_DWARF4=y) are not specified. This causes the wrong kernel to be built, and selftests to similarly fail to build. Additionally, in the BPF selftests kconfig file, CONFIG_NF_CONNTRACK_MARK=y is specified, so that the 'u_int32_t mark' field will be present in the definition of struct nf_conn. While a dependency of CONFIG_NF_CONNTRACK_MARK=y, CONFIG_NETFILTER_ADVANCED=y, should be enabled by default, I've run into instances of CONFIG_NF_CONNTRACK_MARK not being set because CONFIG_NETFILTER_ADVANCED isn't set, and have to manually enable them with make menuconfig. Let's add these missing kconfig options to the file so that the necessary dependencies are in place to build vmlinux. Otherwise, we'll get errors like this when we try to compile selftests and generate vmlinux.h: $ cd /path/to/bpf-next $ make mrproper; make defconfig $ cat tools/testing/selftests/config >> .config $ make -j ... $ cd tools/testing/selftests/bpf $ make clean $ make -j ... LD [M] tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.ko tools/testing/selftests/bpf/tools/build/bpftool/bootstrap/bpftool btf dump file vmlinux format c > tools/testing/selftests/bpf/tools/build/bpftool/vmlinux.h libbpf: failed to find '.BTF' ELF section in vmlinux Error: failed to load BTF from bpf-next/vmlinux: No data available make[1]: *** [Makefile:208: tools/testing/selftests/bpf/tools/build/bpftool/vmlinux.h] Error 195 make[1]: *** Deleting file 'tools/testing/selftests/bpf/tools/build/bpftool/vmlinux.h' make: *** [Makefile:261: tools/testing/selftests/bpf/tools/sbin/bpftool] Error 2 Signed-off-by: David Vernet <void@manifault.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20230602140108.1177900-1-void@manifault.com
* tools/resolve_btfids: Fix setting HOSTCFLAGS (Viktor Malik, 2023-06-05, 1 file, -2/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | Building BPF selftests with custom HOSTCFLAGS yields an error: # make HOSTCFLAGS="-O2" [...] HOSTCC ./tools/testing/selftests/bpf/tools/build/resolve_btfids/main.o main.c:73:10: fatal error: linux/rbtree.h: No such file or directory 73 | #include <linux/rbtree.h> | ^~~~~~~~~~~~~~~~ The reason is that tools/bpf/resolve_btfids/Makefile passes header include paths by extending HOSTCFLAGS which is overridden by setting HOSTCFLAGS in the make command (because of Makefile rules [1]). This patch fixes the above problem by passing the include paths via `HOSTCFLAGS_resolve_btfids` which is used by tools/build/Build.include and can be combined with overridding HOSTCFLAGS. [1] https://www.gnu.org/software/make/manual/html_node/Overriding.html Fixes: 56a2df7615fa ("tools/resolve_btfids: Compile resolve_btfids as host program") Signed-off-by: Viktor Malik <vmalik@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20230530123352.1308488-1-vmalik@redhat.com
* selftests/bpf: Add test for non-NULLable PTR_TO_BTF_IDs (David Vernet, 2023-06-05, 2 files, -0/+25)
| | | | | | | | | | | | | | | In a recent patch, we taught the verifier that trusted PTR_TO_BTF_ID can never be NULL. This prevents the verifier from incorrectly failing to load certain programs where it gets confused and thinks a reference isn't dropped because it incorrectly assumes that a branch exists in which a NULL PTR_TO_BTF_ID pointer is never released. This patch adds a testcase that verifies this cannot happen. Signed-off-by: David Vernet <void@manifault.com> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/20230602150112.1494194-2-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: Make bpf_refcount_acquire fallible for non-owning refs (Dave Marchevsky, 2023-06-05, 2 files, -1/+5)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch fixes an incorrect assumption made in the original bpf_refcount series [0], specifically that the BPF program calling bpf_refcount_acquire on some node can always guarantee that the node is alive. In that series, the patch adding failure behavior to rbtree_add and list_push_{front, back} breaks this assumption for non-owning references. Consider the following program: n = bpf_kptr_xchg(&mapval, NULL); /* skip error checking */ bpf_spin_lock(&l); if(bpf_rbtree_add(&t, &n->rb, less)) { bpf_refcount_acquire(n); /* Failed to add, do something else with the node */ } bpf_spin_unlock(&l); It's incorrect to assume that bpf_refcount_acquire will always succeed in this scenario. bpf_refcount_acquire is being called in a critical section here, but the lock being held is associated with rbtree t, which isn't necessarily the lock associated with the tree that the node is already in. So after bpf_rbtree_add fails to add the node and calls bpf_obj_drop in it, the program has no ownership of the node's lifetime. Therefore the node's refcount can be decr'd to 0 at any time after the failing rbtree_add. If this happens before the refcount_acquire above, the node might be free'd, and regardless refcount_acquire will be incrementing a 0 refcount. Later patches in the series exercise this scenario, resulting in the expected complaint from the kernel (without this patch's changes): refcount_t: addition on 0; use-after-free. WARNING: CPU: 1 PID: 207 at lib/refcount.c:25 refcount_warn_saturate+0xbc/0x110 Modules linked in: bpf_testmod(O) CPU: 1 PID: 207 Comm: test_progs Tainted: G O 6.3.0-rc7-02231-g723de1a718a2-dirty #371 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014 RIP: 0010:refcount_warn_saturate+0xbc/0x110 Code: 6f 64 f6 02 01 e8 84 a3 5c ff 0f 0b eb 9d 80 3d 5e 64 f6 02 00 75 94 48 c7 c7 e0 13 d2 82 c6 05 4e 64 f6 02 01 e8 64 a3 5c ff <0f> 0b e9 7a ff ff ff 80 3d 38 64 f6 02 00 0f 85 6d ff ff ff 48 c7 RSP: 0018:ffff88810b9179b0 EFLAGS: 00010082 RAX: 0000000000000000 RBX: 0000000000000002 RCX: 0000000000000000 RDX: 0000000000000202 RSI: 0000000000000008 RDI: ffffffff857c3680 RBP: ffff88810027d3c0 R08: ffffffff8125f2a4 R09: ffff88810b9176e7 R10: ffffed1021722edc R11: 746e756f63666572 R12: ffff88810027d388 R13: ffff88810027d3c0 R14: ffffc900005fe030 R15: ffffc900005fe048 FS: 00007fee0584a700(0000) GS:ffff88811b280000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00005634a96f6c58 CR3: 0000000108ce9002 CR4: 0000000000770ee0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: <TASK> bpf_refcount_acquire_impl+0xb5/0xc0 (rest of output snipped) The patch addresses this by changing bpf_refcount_acquire_impl to use refcount_inc_not_zero instead of refcount_inc and marking bpf_refcount_acquire KF_RET_NULL. For owning references, though, we know the above scenario is not possible and thus that bpf_refcount_acquire will always succeed. Some verifier bookkeeping is added to track "is input owning ref?" for bpf_refcount_acquire calls and return false from is_kfunc_ret_null for bpf_refcount_acquire on owning refs despite it being marked KF_RET_NULL. 
Existing selftests using bpf_refcount_acquire are modified where necessary to NULL-check its return value. [0]: https://lore.kernel.org/bpf/20230415201811.343116-1-davemarchevsky@fb.com/ Fixes: d2dcc67df910 ("bpf: Migrate bpf_rbtree_add and bpf_list_push_{front,back} to possibly fail") Reported-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Link: https://lore.kernel.org/r/20230602022647.1571784-5-davemarchevsky@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* selftests/bpf: Test table ID fib lookup BPF helper (Louis DeLosSantos, 2023-06-01, 1 file, -8/+53)
| | | | | | | | | | | | | | | | | | | | Add additional test cases to `fib_lookup.c` prog_test. These test cases add a new /24 network to the previously unused veth2 device, removes the directly connected route from the main routing table and moves it to table 100. The first test case then confirms a fib lookup for a remote address in this directly connected network, using the main routing table fails. The second test case ensures the same fib lookup using table 100 succeeds. An additional pair of tests which function in the same manner are added for IPv6. Signed-off-by: Louis DeLosSantos <louis.delos.devel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230505-bpf-add-tbid-fib-lookup-v2-2-0a31c22c748c@gmail.com
* bpf: Add table ID to bpf_fib_lookup BPF helper (Louis DeLosSantos, 2023-06-01, 1 file, -3/+18)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add ability to specify routing table ID to the `bpf_fib_lookup` BPF helper. A new field `tbid` is added to `struct bpf_fib_lookup` used as parameters to the `bpf_fib_lookup` BPF helper. When the helper is called with the `BPF_FIB_LOOKUP_DIRECT` and `BPF_FIB_LOOKUP_TBID` flags the `tbid` field in `struct bpf_fib_lookup` will be used as the table ID for the fib lookup. If the `tbid` does not exist the fib lookup will fail with `BPF_FIB_LKUP_RET_NOT_FWDED`. The `tbid` field becomes a union over the vlan related output fields in `struct bpf_fib_lookup` and will be zeroed immediately after usage. This functionality is useful in containerized environments. For instance, if a CNI wants to dictate the next-hop for traffic leaving a container it can create a container-specific routing table and perform a fib lookup against this table in a "host-net-namespace-side" TC program. This functionality also allows `ip rule` like functionality at the TC layer, allowing an eBPF program to pick a routing table based on some aspect of the sk_buff. As a concrete use case, this feature will be used in Cilium's SRv6 L3VPN datapath. When egress traffic leaves a Pod an eBPF program attached by Cilium will determine which VRF the egress traffic should target, and then perform a FIB lookup in a specific table representing this VRF's FIB. Signed-off-by: Louis DeLosSantos <louis.delos.devel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230505-bpf-add-tbid-fib-lookup-v2-1-0a31c22c748c@gmail.com
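A hedged sketch (not taken from the patch) of how a TC program might use the new field; the table ID, address setup, and section name are placeholders:

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    SEC("tc")
    int fib_lookup_tbid(struct __sk_buff *skb)
    {
        struct bpf_fib_lookup params = {};
        long ret;

        params.family  = 2;           /* AF_INET */
        params.ifindex = skb->ifindex;
        /* ... fill in saddr/daddr and L4 ports from the packet here ... */
        params.tbid    = 100;         /* hypothetical per-container table */

        ret = bpf_fib_lookup(skb, &params, sizeof(params),
                             BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID);
        if (ret == BPF_FIB_LKUP_RET_NOT_FWDED)
            return TC_ACT_SHOT;       /* table missing or no route */
        return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";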
* selftests/bpf: Add a test where map key_type_id with decl_tag type (Yonghong Song, 2023-05-30, 1 file, -0/+40)
| | | | | | | | | | | Add two selftests where map creation key/value type_id's are decl_tags. Without previous patch, kernel warnings will appear similar to the one in the previous patch. With the previous patch, both kernel warnings are silenced. Signed-off-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20230530205034.266643-1-yhs@fb.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
* tools: ynl: Support enums in struct members in genetlink-legacy (Donald Hunter, 2023-05-29, 2 files, -1/+7)
| | | | | | | | Support decoding scalars as enums in struct members for genetlink-legacy specs. Signed-off-by: Donald Hunter <donald.hunter@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* tools: ynl: Initialise fixed headers to 0 in genetlink-legacy (Donald Hunter, 2023-05-29, 1 file, -1/+1)
| | | | | | | | | This eliminates the need for e.g. --json '{"dp-ifindex":0}' which is not too big a deal for ovs but will get tiresome for fixed header structs that have many members. Signed-off-by: Donald Hunter <donald.hunter@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next (Jakub Kicinski, 2023-05-26, 61 files, -690/+2282)
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next Daniel Borkmann says: ==================== pull-request: bpf-next 2023-05-26 We've added 54 non-merge commits during the last 10 day(s) which contain a total of 76 files changed, 2729 insertions(+), 1003 deletions(-). The main changes are: 1) Add the capability to destroy sockets in BPF through a new kfunc, from Aditi Ghag. 2) Support O_PATH fds in BPF_OBJ_PIN and BPF_OBJ_GET commands, from Andrii Nakryiko. 3) Add capability for libbpf to resize datasec maps when backed via mmap, from JP Kobryn. 4) Move all the test kfuncs for CI out of the kernel and into bpf_testmod, from Jiri Olsa. 5) Big batch of xsk selftest improvements to prep for multi-buffer testing, from Magnus Karlsson. 6) Show the target_{obj,btf}_id in tracing link's fdinfo and dump it via bpftool, from Yafang Shao. 7) Various misc BPF selftest improvements to work with upcoming LLVM 17, from Yonghong Song. 8) Extend bpftool to specify netdevice for resolving XDP hints, from Larysa Zaremba. 9) Document masking in shift operations for the insn set document, from Dave Thaler. 10) Extend BPF selftests to check xdp_feature support for bond driver, from Lorenzo Bianconi. * tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (54 commits) bpf: Fix bad unlock balance on freeze_mutex libbpf: Ensure FD >= 3 during bpf_map__reuse_fd() libbpf: Ensure libbpf always opens files with O_CLOEXEC selftests/bpf: Check whether to run selftest libbpf: Change var type in datasec resize func bpf: drop unnecessary bpf_capable() check in BPF_MAP_FREEZE command libbpf: Selftests for resizing datasec maps libbpf: Add capability for resizing datasec maps selftests/bpf: Add path_fd-based BPF_OBJ_PIN and BPF_OBJ_GET tests libbpf: Add opts-based bpf_obj_pin() API and add support for path_fd bpf: Support O_PATH FDs in BPF_OBJ_PIN and BPF_OBJ_GET commands libbpf: Start v1.3 development cycle bpf: Validate BPF object in BPF_OBJ_PIN before calling LSM bpftool: Specify XDP Hints ifname when loading program selftests/bpf: Add xdp_feature selftest for bond device selftests/bpf: Test bpf_sock_destroy selftests/bpf: Add helper to get port using getsockname bpf: Add bpf_sock_destroy kfunc bpf: Add kfunc filter function to 'struct btf_kfunc_id_set' bpf: udp: Implement batching for sockets iterator ... ==================== Link: https://lore.kernel.org/r/20230526222747.17775-1-daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
| * libbpf: Ensure FD >= 3 during bpf_map__reuse_fd() (Andrii Nakryiko, 2023-05-26, 1 file, -7/+6)
| | | | | | | | | | | | | | | | | | | | | | | | Improve bpf_map__reuse_fd() logic and ensure that dup'ed map FD is "good" (>= 3) and has O_CLOEXEC flags. Use fcntl(F_DUPFD_CLOEXEC) for that, similarly to ensure_good_fd() helper we already use in low-level APIs that work with bpf() syscall. Suggested-by: Lennart Poettering <lennart@poettering.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230525221311.2136408-2-andrii@kernel.org
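The fcntl()-based approach mentioned above, as a minimal illustrative helper:

    #include <fcntl.h>

    /* Duplicate an FD so the copy is both >= 3 and has O_CLOEXEC set. */
    static int dup_good_fd(int fd)
    {
        return fcntl(fd, F_DUPFD_CLOEXEC, 3);
    }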
| * libbpf: Ensure libbpf always opens files with O_CLOEXEC (Andrii Nakryiko, 2023-05-26, 4 files, -8/+7)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Make sure that libbpf code always gets FD with O_CLOEXEC flag set, regardless if file is open through open() or fopen(). For the latter this means to add "e" to mode string, which is supported since pretty ancient glibc v2.7. Also drop the outdated TODO comment in usdt.c, which was already completed. Suggested-by: Lennart Poettering <lennart@poettering.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230525221311.2136408-1-andrii@kernel.org
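For the fopen() case, the change amounts to the "e" mode flag; a minimal illustration:

    #include <stdio.h>

    /* "e" asks glibc (>= 2.7) to open the underlying FD with O_CLOEXEC. */
    static FILE *fopen_cloexec(const char *path)
    {
        return fopen(path, "re");
    }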
| * selftests/bpf: Check whether to run selftest (Daniel Müller, 2023-05-25, 1 file, -1/+3)
| | | | | | | | | | | | | | | | | | | | | | | | The sockopt test invokes test__start_subtest and then unconditionally asserts the success. That means that even if deny-listed, any test will still run and potentially fail. Evaluate the return value of test__start_subtest() to achieve the desired behavior, as other tests do. Signed-off-by: Daniel Müller <deso@posteo.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230525232248.640465-1-deso@posteo.net
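A sketch of the pattern described above, assuming the usual prog_tests harness conventions; the subtest name and body are placeholders:

    #include "test_progs.h"   /* selftests/bpf test harness header */

    static void run_some_subtest(void)
    {
        /* ... subtest body with ASSERT_* checks ... */
    }

    void test_sockopt_example(void)
    {
        /* test__start_subtest() returns false when the subtest is filtered
         * out (e.g. deny-listed), so only run the body when it returns true. */
        if (test__start_subtest("some_subtest"))
            run_some_subtest();
    }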
| * libbpf: Change var type in datasec resize func (JP Kobryn, 2023-05-25, 1 file, -2/+2)
| | | | | | | | | | | | | | | | | | | | This changes a local variable type that stores a new array id to match the return type of btf__add_array(). Signed-off-by: JP Kobryn <inwardvessel@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20230525001323.8554-1-inwardvessel@gmail.com
| * libbpf: Selftests for resizing datasec maps (JP Kobryn, 2023-05-24, 2 files, -0/+285)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch adds test coverage for resizing datasec maps. The first two subtests resize the bss and custom data sections. In both cases, an initial array (of length one) has its element set to one. After resizing the rest of the array is filled with ones as well. A BPF program is then run to sum the respective arrays and back on the userspace side the sum is checked to be equal to the number of elements. The third subtest attempts to perform resizing under conditions that will result in either the resize failing or the BTF info being cleared. Signed-off-by: JP Kobryn <inwardvessel@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20230524004537.18614-3-inwardvessel@gmail.com
| * libbpf: Add capability for resizing datasec maps (JP Kobryn, 2023-05-24, 2 files, -11/+142)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch updates bpf_map__set_value_size() so that if the given map is memory mapped, it will attempt to resize the mapped region. Initial contents of the mapped region are preserved. BTF is not required, but after the mapping is resized an attempt is made to adjust the associated BTF information if the following criteria is met: - BTF info is present - the map is a datasec - the final variable in the datasec is an array ... the resulting BTF info will be updated so that the final array variable is associated with a new BTF array type sized to cover the requested size. Note that the initial resizing of the memory mapped region can succeed while the subsequent BTF adjustment can fail. In this case, BTF info is dropped from the map by clearing the key and value type. Signed-off-by: JP Kobryn <inwardvessel@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20230524004537.18614-2-inwardvessel@gmail.com
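A hedged sketch of how a caller might use this; the object and map names are placeholders, and per the description above libbpf grows the mmap'ed region and adjusts the trailing BTF array if one is present:

    #include <errno.h>
    #include <bpf/libbpf.h>

    static int grow_bss(struct bpf_object *obj, size_t new_sz)
    {
        struct bpf_map *map = bpf_object__find_map_by_name(obj, ".bss");

        if (!map)
            return -ENOENT;
        /* Must be called before bpf_object__load(). */
        return bpf_map__set_value_size(map, new_sz);
    }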
| * selftests/bpf: Add path_fd-based BPF_OBJ_PIN and BPF_OBJ_GET tests (Andrii Nakryiko, 2023-05-23, 1 file, -0/+268)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add a selftest demonstrating using detach-mounted BPF FS using new mount APIs, and pinning and getting BPF map using such mount. This demonstrates how something like container manager could setup BPF FS, pin and adjust all the necessary objects in it, all before exposing BPF FS to a particular mount namespace. Also add a few subtests validating all meaningful combinations of path_fd and pathname. We use mounted /sys/fs/bpf location for these. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230523170013.728457-5-andrii@kernel.org
| * libbpf: Add opts-based bpf_obj_pin() API and add support for path_fd (Andrii Nakryiko, 2023-05-23, 3 files, -5/+32)
| | | | | | | | | | | | | | | | | | | | Add path_fd support for bpf_obj_pin() and bpf_obj_get() operations (through their opts-based variants). This allows to take advantage of new kernel-side support for O_PATH-based pin/get location specification. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230523170013.728457-4-andrii@kernel.org
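A minimal sketch of the opts-based flow, assuming the struct and function names introduced by this series (bpf_obj_pin_opts with a path_fd field); mnt_fd stands for an FD referring to a (possibly detached) BPF FS mount:

    #include <bpf/bpf.h>

    static int pin_map_at(int map_fd, int mnt_fd)
    {
        LIBBPF_OPTS(bpf_obj_pin_opts, opts,
            .path_fd = mnt_fd,   /* pin relative to this FD, *at()-style */
        );

        return bpf_obj_pin_opts(map_fd, "my_map", &opts);
    }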
| * bpf: Support O_PATH FDs in BPF_OBJ_PIN and BPF_OBJ_GET commands (Andrii Nakryiko, 2023-05-23, 1 file, -0/+10)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Current UAPI of BPF_OBJ_PIN and BPF_OBJ_GET commands of bpf() syscall forces users to specify pinning location as a string-based absolute or relative (to current working directory) path. This has various implications related to security (e.g., symlink-based attacks), forces BPF FS to be exposed in the file system, which can cause races with other applications. One of the feedbacks we got from folks working with containers heavily was that inability to use purely FD-based location specification was an unfortunate limitation and hindrance for BPF_OBJ_PIN and BPF_OBJ_GET commands. This patch closes this oversight, adding path_fd field to BPF_OBJ_PIN and BPF_OBJ_GET UAPI, following conventions established by *at() syscalls for dirfd + pathname combinations. This now allows interesting possibilities like working with detached BPF FS mount (e.g., to perform multiple pinnings without running a risk of someone interfering with them), and generally making pinning/getting more secure and not prone to any races and/or security attacks. This is demonstrated by a selftest added in subsequent patch that takes advantage of new mount APIs (fsopen, fsconfig, fsmount) to demonstrate creating detached BPF FS mount, pinning, and then getting BPF map out of it, all while never exposing this private instance of BPF FS to outside worlds. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Christian Brauner <brauner@kernel.org> Link: https://lore.kernel.org/bpf/20230523170013.728457-4-andrii@kernel.org
| * libbpf: Start v1.3 development cycle (Andrii Nakryiko, 2023-05-23, 2 files, -1/+4)
| | | | | | | | | | | | | | | | Bump libbpf.map to v1.3.0 to start a new libbpf version cycle. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230523170013.728457-3-andrii@kernel.org
| * bpftool: Specify XDP Hints ifname when loading program (Larysa Zaremba, 2023-05-23, 5 files, -20/+64)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add ability to specify a network interface used to resolve XDP hints kfuncs when loading program through bpftool. Usage: bpftool prog load [...] xdpmeta_dev <ifname> Writing just 'dev <ifname>' instead of 'xdpmeta_dev' is a very probable mistake that results in not very descriptive errors, so 'bpftool prog load [...] dev <ifname>' syntax becomes deprecated, followed by 'bpftool map create [...] dev <ifname>' for consistency. Now, to offload program, execute: bpftool prog load [...] offload_dev <ifname> To offload map: bpftool map create [...] offload_dev <ifname> 'dev <ifname>' still performs offloading in the commands above, but now triggers a warning and is excluded from bash completion. 'xdpmeta_dev' and 'offload_dev' are mutually exclusive options, because 'xdpmeta_dev' basically makes a program device-bound without loading it onto the said device. For now, offloaded programs cannot use XDP hints [0], but if this changes, using 'offload_dev <ifname>' should cover this case. [0] https://lore.kernel.org/bpf/a5a636cc-5b03-686f-4be0-000383b05cfc@linux.dev Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20230517160103.1088185-1-larysa.zaremba@intel.com
| * selftests/bpf: Add xdp_feature selftest for bond device (Lorenzo Bianconi, 2023-05-23, 1 file, -0/+121)
| | | | | | | | | | | | | | | | | | Introduce selftests to check xdp_feature support for bond driver. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Jussi Maki <joamaki@gmail.com> Link: https://lore.kernel.org/bpf/64cb8f20e6491f5b971f8d3129335093c359aad7.1684329998.git.lorenzo@kernel.org
| * selftests/bpf: Test bpf_sock_destroy (Aditi Ghag, 2023-05-19, 3 files, -0/+388)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The test cases for destroying sockets mirror the intended usages of the bpf_sock_destroy kfunc using iterators. The destroy helpers set `ECONNABORTED` error code that we can validate in the test code with client sockets. But UDP sockets have an overriding error code from `disconnect()` called during abort, so the error code validation is only done for TCP sockets. The failure test cases validate that the `bpf_sock_destroy` kfunc is not allowed from program attach types other than BPF trace iterator, and such programs fail to load. Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com> Link: https://lore.kernel.org/r/20230519225157.760788-10-aditi.ghag@isovalent.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
| * selftests/bpf: Add helper to get port using getsockname (Aditi Ghag, 2023-05-19, 2 files, -0/+24)
| | | | | | | | | | | | | | | | | | | | The helper will be used to programmatically retrieve and pass ports in userspace and kernel selftest programs. Suggested-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com> Link: https://lore.kernel.org/r/20230519225157.760788-9-aditi.ghag@isovalent.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
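A sketch of what such a helper typically looks like (names are illustrative, not necessarily the patch's):

    #include <errno.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Return the locally bound port (network byte order) for a v4/v6 socket. */
    static int get_sock_port(int fd, uint16_t *port)
    {
        struct sockaddr_storage ss = {};
        socklen_t len = sizeof(ss);

        if (getsockname(fd, (struct sockaddr *)&ss, &len))
            return -errno;
        if (ss.ss_family == AF_INET)
            *port = ((struct sockaddr_in *)&ss)->sin_port;
        else
            *port = ((struct sockaddr_in6 *)&ss)->sin6_port;
        return 0;
    }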
| * bpftool: Show target_{obj,btf}_id in tracing link info (Yafang Shao, 2023-05-19, 1 file, -0/+6)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The target_btf_id can help us understand which kernel function is linked by a tracing prog. The target_btf_id and target_obj_id have already been exposed to userspace, so we just need to show them. The result as follows, $ tools/bpf/bpftool/bpftool link show 2: tracing prog 13 prog_type tracing attach_type trace_fentry target_obj_id 1 target_btf_id 13964 pids trace(10673) $ tools/bpf/bpftool/bpftool link show -j [{"id":2,"type":"tracing","prog_id":13,"prog_type":"tracing","attach_type":"trace_fentry","target_obj_id":1,"target_btf_id":13964,"pids":[{"pid":10673,"comm":"trace"}]}] Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20230517103126.68372-3-laoar.shao@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/bpf: Make bpf_dynptr_is_rdonly() prototype consistent with kernel (Yonghong Song, 2023-05-17, 1 file, -1/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently kernel kfunc bpf_dynptr_is_rdonly() has prototype ... __bpf_kfunc bool bpf_dynptr_is_rdonly(struct bpf_dynptr_kern *ptr) ... while selftests bpf_kfuncs.h has: extern int bpf_dynptr_is_rdonly(const struct bpf_dynptr *ptr) __ksym; Such a mismatch might cause problems although currently it is okay in selftests. Fix it to prevent future potential surprise. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230517040409.4024618-1-yhs@fb.com
| * selftests/bpf: Fix dynptr/test_dynptr_is_null (Yonghong Song, 2023-05-17, 4 files, -1/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | With latest llvm17, dynptr/test_dynptr_is_null subtest failed in my testing VM. The failure log looks like below: All error logs: tester_init:PASS:tester_log_buf 0 nsec process_subtest:PASS:obj_open_mem 0 nsec process_subtest:PASS:Can't alloc specs array 0 nsec verify_success:PASS:dynptr_success__open 0 nsec verify_success:PASS:bpf_object__find_program_by_name 0 nsec verify_success:PASS:dynptr_success__load 0 nsec verify_success:PASS:bpf_program__attach 0 nsec verify_success:FAIL:err unexpected err: actual 4 != expected 0 #65/9 dynptr/test_dynptr_is_null:FAIL The error happens for bpf prog test_dynptr_is_null in dynptr_success.c: if (bpf_dynptr_is_null(&ptr2)) { err = 4; goto exit; } The bpf_dynptr_is_null(&ptr) unexpectedly returned a non-zero value and the control went to the error path. Digging further, I found the root cause is due to function signature difference between kernel and user space. In kernel, we have ... __bpf_kfunc bool bpf_dynptr_is_null(struct bpf_dynptr_kern *ptr) ... while in bpf_kfuncs.h we have: extern int bpf_dynptr_is_null(const struct bpf_dynptr *ptr) __ksym; The kernel bpf_dynptr_is_null disasm code: ffffffff812f1a90 <bpf_dynptr_is_null>: ffffffff812f1a90: f3 0f 1e fa endbr64 ffffffff812f1a94: 0f 1f 44 00 00 nopl (%rax,%rax) ffffffff812f1a99: 53 pushq %rbx ffffffff812f1a9a: 48 89 fb movq %rdi, %rbx ffffffff812f1a9d: e8 ae 29 17 00 callq 0xffffffff81464450 <__asan_load8_noabort> ffffffff812f1aa2: 48 83 3b 00 cmpq $0x0, (%rbx) ffffffff812f1aa6: 0f 94 c0 sete %al ffffffff812f1aa9: 5b popq %rbx ffffffff812f1aaa: c3 retq Note that only 1-byte register %al is set and the other 7-bytes are not touched. In bpf program, the asm code for the above bpf_dynptr_is_null(&ptr2): 266: 85 10 00 00 ff ff ff ff call -0x1 267: b4 01 00 00 04 00 00 00 w1 = 0x4 268: 16 00 03 00 00 00 00 00 if w0 == 0x0 goto +0x3 <LBB9_8> Basically, 4-byte subregister is tested. This might cause error as the value other than the lowest byte might not be 0. This patch fixed the issue by using the identical func prototype across kernel and selftest user space. The fixed bpf asm code: 267: 85 10 00 00 ff ff ff ff call -0x1 268: 54 00 00 00 01 00 00 00 w0 &= 0x1 269: b4 01 00 00 04 00 00 00 w1 = 0x4 270: 16 00 03 00 00 00 00 00 if w0 == 0x0 goto +0x3 <LBB9_8> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230517040404.4023912-1-yhs@fb.com
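Both dynptr fixes above boil down to making the selftest declarations match the kernel's bool return type; a sketch of the corrected externs, assuming the usual vmlinux.h-based BPF build:

    #include "vmlinux.h"          /* provides struct bpf_dynptr and bool */
    #include <bpf/bpf_helpers.h>  /* __ksym */

    /* int -> bool, so only the low bit of the return value is significant. */
    extern bool bpf_dynptr_is_null(const struct bpf_dynptr *ptr) __ksym;
    extern bool bpf_dynptr_is_rdonly(const struct bpf_dynptr *ptr) __ksym;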
| * bpftool: Support bpffs mountpoint as pin path for prog loadall (Pengcheng Yang, 2023-05-17, 5 files, -7/+10)
| | | | | | | | | | | | | | | | | | | | | | | | Currently, when using prog loadall and the pin path is a bpffs mountpoint, bpffs will be repeatedly mounted to the parent directory of the bpffs mountpoint path. For example, a `bpftool prog loadall test.o /sys/fs/bpf` will trigger this. Signed-off-by: Pengcheng Yang <yangpc@wangsu.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/1683342439-3677-1-git-send-email-yangpc@wangsu.com
| * selftests/bpf: Do not use sign-file as testcase (Alexey Gladkov, 2023-05-17, 1 file, -2/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The sign-file utility (from scripts/) is used in prog_tests/verify_pkcs7_sig.c, but the utility should not be called as a test. Executing this utility produces the following error: selftests: /linux/tools/testing/selftests/bpf: urandom_read ok 16 selftests: /linux/tools/testing/selftests/bpf: urandom_read selftests: /linux/tools/testing/selftests/bpf: sign-file not ok 17 selftests: /linux/tools/testing/selftests/bpf: sign-file # exit=2 Also, urandom_read is mistakenly used as a test. It does not lead to an error, but should be moved over to TEST_GEN_FILES as well. The empty TEST_CUSTOM_PROGS can then be removed. Fixes: fc97590668ae ("selftests/bpf: Add test for bpf_verify_pkcs7_signature() kfunc") Signed-off-by: Alexey Gladkov <legion@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Roberto Sassu <roberto.sassu@huawei.com> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/ZEuWFk3QyML9y5QQ@example.org Link: https://lore.kernel.org/bpf/88e3ab23029d726a2703adcf6af8356f7a2d3483.1684316821.git.legion@kernel.org
| * selftests/xsk: adjust packet pacing for multi-buffer support (Magnus Karlsson, 2023-05-16, 2 files, -20/+30)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Modify the packet pacing algorithm so that it works with multi-buffer packets. This algorithm makes sure we do not send too many buffers to the receiving thread so that packets have to be dropped. The previous algorithm made the assumption that each packet only consumes one buffer, but that is not true anymore when multi-buffer support gets added. Instead, we find out what the largest packet size is in the packet stream and assume that each packet will consume this many buffers. This is conservative and overly cautious as there might be smaller packets in the stream that need fewer buffers per packet. But it keeps the algorithm simple. Also simplify it by removing the pthread conditional and just test if there is enough space in the Rx thread before trying to send one more batch. Also makes the tests run faster. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230516103109.3066-11-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/xsk: generate data for multi-buffer packets (Magnus Karlsson, 2023-05-16, 1 file, -27/+43)
| | | | | | | | | | | | | | | | | | | | | | | | Add the ability to generate data in the packets that are correct for multi-buffer packets. The ethernet header should only go into the first fragment followed by data and the others should only have data. We also need to modify the pkt_dump function so that it knows what fragment has an ethernet header so it can print this. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230516103109.3066-10-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/xsk: populate fill ring based on frags needed (Magnus Karlsson, 2023-05-16, 2 files, -12/+41)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Populate the fill ring based on the number of frags a packet needs. With multi-buffer support, a packet might require more than a single fragment/buffer, so the function xsk_populate_fill_ring() needs to consider how many buffers a packet will consume, and put that many buffers on the fill ring for each packet it should receive. As we are still not sending any multi-buffer packets, the function will only produce one buffer per packet at the moment. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230516103109.3066-9-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/xsk: test for huge pages only once (Magnus Karlsson, 2023-05-16, 2 files, -94/+94)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Test for hugepages only once at the beginning of the execution of the whole test suite, instead of before each test that needs huge pages. These are the tests that use unaligned mode. As more unaligned tests will be added, so the current system just does not scale. With this change, there are now three possible outcomes of a test run: fail, pass, or skip. To simplify the handling of this, the function testapp_validate_traffic() now returns this value to the main loop. As this function is used by nearly all tests, it meant a small change to most of them. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230516103109.3066-8-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/xsk: store offset in pkt instead of addr (Magnus Karlsson, 2023-05-16, 2 files, -64/+90)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Store the offset in struct pkt instead of the address. This is important since address is only meaningful in the context of a packet that is stored in a single umem buffer and thus a single Tx descriptor. If the packet, in contrast need to be represented by multiple buffers in the umem, storing the address makes no sense since the packet will consist of multiple buffers in the umem at various addresses. This change is in preparation for the upcoming multi-buffer support in AF_XDP and the corresponding tests. So instead of indicating the address, we instead indicate the offset of the packet in the first buffer. The actual address of the buffer is allocated from the umem with a new function called umem_alloc_buffer(). This also means we can get rid of the use_fill_for_addr flag as the addresses fed into the fill ring will always be the offset from the pkt specification in the packet stream plus the address of the allocated buffer from the umem. No special casing needed. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230516103109.3066-7-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/xsk: add packet iterator for tx to packet stream (Magnus Karlsson, 2023-05-16, 2 files, -21/+24)
| | | | | | | | | | | | | | | | | | | | Convert the current variable rx_pkt_nb to an iterator that can be used for both Rx and Tx. This to simplify the code and making Tx more like Rx that already has this feature. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230516103109.3066-6-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/xsk: dump packet at error (Magnus Karlsson, 2023-05-16, 3 files, -22/+9)
| | | | | | | | | | | | | | | | | | | | | | | | | | Dump the content of the packet when a test finds that packets are received out of order, the length is wrong, or some other packet error. Use the already existing pkt_dump function for this and call it when the above errors are detected. Get rid of the command line option for dumping packets as it is not useful to print out thousands of good packets followed by the faulty one you would like to see. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230516103109.3066-5-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/xsk: add varying payload pattern within packet (Magnus Karlsson, 2023-05-16, 2 files, -24/+47)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add a varying payload pattern within the packet. Instead of having just a packet number that is the same for all words in a packet, make each word different in the packet. The upper 16-bits are set to the packet number and the lower 16-bits are the sequence number of the words in this packet. So the 3rd packet's 5th 32-bit word of data will contain the number (2<<32) | 4 as they are numbered from 0. This will make it easier to detect fragments that are out of order when starting to test multi-buffer support. The member payload in the packet is renamed pkt_nb to reflect that it is now only a pkt_nb, not the real payload as seen above. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230516103109.3066-4-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>