* bpf: Document BPF_PROG_TYPE_CGROUP_SYSCTL (Andrey Ignatov, 2019-04-18, 2 files, -0/+134)

    Add documentation for BPF_PROG_TYPE_CGROUP_SYSCTL, including general info,
    attach type, context, return code, helpers, example and usage
    considerations. A separate file prog_cgroup_sysctl.rst is added to
    Documentation/bpf/. In the future more program types can be documented in
    their own prog_<name>.rst files.

    Another way to place program type specific documentation would be to group
    program types somehow (e.g. cgroup.rst for all cgroup-bpf programs), but it
    may not scale well since some program types may belong to different groups,
    e.g. BPF_PROG_TYPE_CGROUP_SKB can be documented together with either
    cgroup-bpf programs or programs that access skb.

    The new file is added to the index, verified by `make htmldocs` and
    sanity-checked with lynx.

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Acked-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* selftests/bpf: fix a compilation error (Yonghong Song, 2019-04-17, 1 file, -2/+2)

    I hit the following compilation error with gcc 4.8.5.

        prog_tests/flow_dissector.c: In function ‘test_flow_dissector’:
        prog_tests/flow_dissector.c:155:2: error: ‘for’ loop initial declarations are only allowed in C99 mode
          for (int i = 0; i < ARRAY_SIZE(tests); i++) {
          ^
        prog_tests/flow_dissector.c:155:2: note: use option -std=c99 or -std=gnu99 to compile your code

    Let us fix the issue by avoiding this particular C99 feature.

    Fixes: a5cb33464e53 ("selftests/bpf: make flow dissector tests more extensible")
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
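    A minimal sketch of the kind of change described above, hoisting the loop
    variable out of the for statement so the file builds without -std=c99
    (only the names from the error output are real; the body is illustrative):

        int i;  /* declared at block scope instead of inside the for statement */

        for (i = 0; i < ARRAY_SIZE(tests); i++) {
                /* run the test case described by tests[i] ... */
        }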
* Merge branch 'bulk-cpumap-redirect' (Alexei Starovoitov, 2019-04-17, 3 files, -35/+91)

    Jesper Dangaard Brouer says:

    ====================
    This patchset utilizes a number of different kernel bulk APIs for
    optimizing the performance of the XDP cpumap redirect feature.

    Benchmark details are available here:
    https://github.com/xdp-project/xdp-project/blob/master/areas/cpumap/cpumap03-optimizations.org

    Performance measurements can be considered micro benchmarks, as they
    measure dropping packets at different stages in the network stack.
    Summary based on the above:

    Baseline benchmarks
    - baseline-redirect: UdpNoPorts: 3,180,074
    - baseline-redirect: iptables-raw drop: 6,193,534

    Patch1: bpf: cpumap use ptr_ring_consume_batched
    - redirect: UdpNoPorts: 3,327,729
    - redirect: iptables-raw drop: 6,321,540

    Patch2: net: core: introduce build_skb_around
    - redirect: UdpNoPorts: 3,221,303
    - redirect: iptables-raw drop: 6,320,066

    Patch3: bpf: cpumap do bulk allocation of SKBs
    - redirect: UdpNoPorts: 3,290,563
    - redirect: iptables-raw drop: 6,650,112

    Patch4: bpf: cpumap memory prefetchw optimizations for struct page
    - redirect: UdpNoPorts: 3,520,250
    - redirect: iptables-raw drop: 7,649,604

    In this V2 submission I have chosen to drop the SKB-list patch using
    netif_receive_skb_list() as it was not showing a performance improvement
    for these micro benchmarks.
    ====================

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * bpf: cpumap memory prefetchw optimizations for struct page (Jesper Dangaard Brouer, 2019-04-17, 1 file, -0/+12)

    A lot of the performance gain comes from this patch.

    While analysing performance overhead it was found that the largest CPU
    stalls were caused when touching the struct page area. It is first read
    with a READ_ONCE from build_skb_around via page_is_pfmemalloc(), and when
    freed it is written by the page_frag_free() call.

    Measurements show that the prefetchw (W) variant operation is needed to
    achieve the performance gain. We believe this optimization is two fold:
    first, the W-variant saves one step in the cache-coherency protocol, and
    second, it helps us avoid the non-temporal prefetch HW optimizations and
    bring this into all cache levels. It might be worth investigating if
    prefetch into L2 will have the same benefit.

    Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
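    A rough sketch of the prefetchw idea described above; the loop and the
    frames/n_frames names are illustrative, not lifted from the patch:

        #include <linux/prefetch.h>

        /* Hint the CPU to pull the struct page in for write before the
         * SKB-build path reads it and page_frag_free() later writes it. */
        for (i = 0; i < n_frames; i++) {
                struct xdp_frame *xdpf = frames[i];

                prefetchw(virt_to_page(xdpf->data));
        }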
| * bpf: cpumap do bulk allocation of SKBs (Jesper Dangaard Brouer, 2019-04-17, 1 file, -7/+15)

    As cpumap now batch-consumes xdp_frame's from the ptr_ring, it knows how
    many SKBs it needs to allocate. Thus, let's bulk allocate these SKBs via
    the kmem_cache_alloc_bulk() API, and use the previously introduced function
    build_skb_around().

    Notice that the flag __GFP_ZERO asks the slab/slub allocator to clear the
    memory for us. This does clear a larger area than needed, but my micro
    benchmarks on Intel CPUs show that this is slightly faster because a
    cacheline-aligned area is cleared for the SKBs. (For the SLUB allocator
    there is future optimization potential, because SKBs will with high
    probability originate from the same page. If we can find/identify
    continuous memory areas then the Intel CPU memset rep stos will have a real
    performance gain.)

    Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
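    A sketch of the combination described above (kmem_cache_alloc_bulk() plus
    build_skb_around()); the cache name, batch variables and the frame_size()
    helper are assumptions, not the actual cpumap code:

        void *skbs[BATCH];      /* BATCH = number of frames dequeued earlier */
        int i, m;

        /* __GFP_ZERO: let the allocator clear the SKBs for us. */
        m = kmem_cache_alloc_bulk(skbuff_head_cache, __GFP_ZERO, n_frames, skbs);

        for (i = 0; i < m; i++) {
                struct xdp_frame *xdpf = frames[i];
                struct sk_buff *skb = skbs[i];

                /* The SKB is caller-provided; build_skb_around() only wires it
                 * up around the existing xdp_frame data, no allocation inside. */
                skb = build_skb_around(skb, xdpf->data, frame_size(xdpf));
        }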
| * net: core: introduce build_skb_around (Jesper Dangaard Brouer, 2019-04-17, 2 files, -19/+54)

    The function build_skb() also has the responsibility to allocate and clear
    the SKB structure. Introduce a new function build_skb_around() that moves
    the responsibility of allocation and clearing to the caller. This allows
    the caller to use the kmem_cache (slab/slub) bulk allocation API.

    The next patch uses this function combined with kmem_cache_alloc_bulk.

    Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Acked-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * bpf: cpumap use ptr_ring_consume_batched (Jesper Dangaard Brouer, 2019-04-17, 1 file, -10/+11)

    Move the ptr_ring dequeue outside the loop that allocates SKBs and calls
    the network stack, as these operations can take some time. The ptr_ring is
    a communication channel between CPUs, where we want to reduce/limit any
    cacheline bouncing.

    Do a concentrated bulk dequeue via ptr_ring_consume_batched, to shorten the
    period and the number of times the remote cacheline in the ptr_ring is
    read.

    Batch size 8 is chosen to (1) limit the BH-disable period, and (2) consume
    one cacheline on 64-bit archs. After reducing the BH-disable section
    further we can consider changing this, while still keeping the L1 cacheline
    size in mind.

    Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
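    A sketch of the bulk-dequeue shape described above; the ring/frames names
    are illustrative:

        #define CPUMAP_BATCH 8          /* 8 pointers = one 64-byte cacheline */
        void *frames[CPUMAP_BATCH];
        int i, n;

        /* One concentrated dequeue instead of one ptr_ring_consume() per
         * packet, so the shared ptr_ring cacheline is held for a shorter time. */
        n = ptr_ring_consume_batched(ring, frames, CPUMAP_BATCH);

        for (i = 0; i < n; i++) {
                /* build an SKB for frames[i] and feed it to the network stack */
        }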
* Merge branch 'af_xdp-smp_mb-fixes' (Alexei Starovoitov, 2019-04-16, 3 files, -10/+98)

    Magnus Karlsson says:

    ====================
    This patch set fixes one bug and removes two dependencies on Linux kernel
    headers from the XDP socket code in libbpf. A number of people have pointed
    out that these two dependencies make it hard to build the XDP socket part
    of libbpf without any kernel header dependencies.

    The two removed dependencies are:

    * Remove the usage of likely and unlikely (compiler.h) in xsk.h. It has
      been reported that the use of these actually decreases the performance
      of the ring access code due to an increase in instruction cache misses,
      so let us just remove these.

    * Remove the dependency on barrier.h as it brings in a lot of kernel
      headers. As the XDP socket code only uses two simple functions from it,
      we can reimplement these. As a bonus, the new implementation is faster
      as it uses the same barrier primitives as the kernel does when the same
      code is compiled there. Without this patch, the user land code uses
      lfence and sfence on x86, which are unnecessarily harsh/thorough.

    In the process of removing these dependencies a missing barrier function
    for at least PPC64 was discovered. For a full explanation of the missing
    barrier, please refer to patch 1. So the patch set now starts with two
    patches fixing this. I have also added a patch at the end removing this
    full memory barrier for x86 only, as it is not needed there.

    Structure of the patch set:
    Patch 1-2: Adds the missing barrier function in kernel and user space.
    Patch 3-4: Removes the dependencies.
    Patch 5:   Optimizes the added barrier from patch 2 so that it does not do
               unnecessary work on x86.

    v2 -> v3:
    * Added missing memory barrier in ring code
    * Added an explanation of the three barriers we use in the code
    * Moved barrier functions from xsk.h to libbpf_util.h
    * Added comment on why we have these functions in libbpf_util.h
    * Added a new barrier function in user space that makes it possible to
      remove the full memory barrier on x86.

    v1 -> v2:
    * Added comment about validity of ARM 32-bit barriers. Only armv7 and
      above.

    /Magnus
    ====================

    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * libbpf: optimize barrier for XDP socket rings (Magnus Karlsson, 2019-04-16, 2 files, -1/+6)

    The full memory barrier in the XDP socket rings on the consumer side,
    between the load of the data and the store of the consumer ring, is there
    to protect the store from being executed before the load of the data. If
    this were allowed to happen, the producer might overwrite the data field
    with a new entry before the consumer got the chance to read it.

    On x86, stores are guaranteed not to be reordered with older loads, so it
    does not need a full memory barrier here. A compile time barrier would be
    enough. This patch introduces a new primitive in libbpf_util.h that
    implements a new barrier type (libbpf_smp_rwmb) hindering stores from being
    reordered with older loads. It is then used in the XDP socket ring access
    code in libbpf to improve performance.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
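    A rough sketch of the idea (not the exact libbpf_util.h contents): on x86 a
    store is never reordered with an older load, so the new read-write barrier
    only needs to stop the compiler, while other architectures fall back to a
    real fence:

        #if defined(__x86_64__) || defined(__i386__)
        # define libbpf_smp_rwmb() asm volatile("" : : : "memory") /* compiler barrier */
        #else
        # define libbpf_smp_rwmb() __sync_synchronize()            /* full barrier fallback */
        #endif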
| * libbpf: remove dependency on barrier.h in xsk.h (Magnus Karlsson, 2019-04-16, 2 files, -3/+29)

    The use of smp_rmb() and smp_wmb() creates a Linux header dependency on
    barrier.h that is unnecessary in most parts. This patch implements the two
    small defines that are needed from barrier.h. As a bonus, the new
    implementations are faster than the default ones as they default to sfence
    and lfence for x86, while we only need a compiler barrier in our case. Just
    as it is when the same ring access code is compiled in the kernel.

    Fixes: 1cad07884239 ("libbpf: add support for using AF_XDP sockets")
    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * libbpf: remove likely/unlikely in xsk.h (Magnus Karlsson, 2019-04-16, 1 file, -2/+2)

    This patch removes the use of likely and unlikely in xsk.h since they
    create a dependency on Linux headers, as reported by several users. There
    have also been reports that the use of these decreases performance as the
    compiler puts the code on two different cache lines instead of on a single
    one. All in all, I think we are better off without them.

    Fixes: 1cad07884239 ("libbpf: add support for using AF_XDP sockets")
    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * libbpf: fix XDP socket ring buffer memory ordering (Magnus Karlsson, 2019-04-16, 1 file, -2/+11)

    The ring buffer code of XDP sockets is missing a memory barrier on the
    consumer side between the load of the data and the write that signals that
    it is ok for the producer to put new data into the buffer. On architectures
    that do not guarantee that stores are not reordered with older loads, the
    producer might put data into the ring before the consumer had the chance to
    read it. As IA does guarantee this ordering, it would only need a compiler
    barrier here, but there are no primitives in barrier.h for this specific
    case (hindering writes from being reordered before older reads), so I had
    to add an smp_mb() here which will translate into a run-time synch
    operation on IA.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * xsk: fix XDP socket ring buffer memory ordering (Magnus Karlsson, 2019-04-16, 1 file, -4/+52)

    The ring buffer code of XDP sockets is missing a memory barrier on the
    consumer side between the load of the data and the write that signals that
    it is ok for the producer to put new data into the buffer. On architectures
    that do not guarantee that stores are not reordered with older loads, the
    producer might put data into the ring before the consumer had the chance to
    read it. As IA does guarantee this ordering, it would only need a compiler
    barrier here, but there are no primitives in Linux for this specific case
    (hindering writes from being reordered before older reads), so I had to add
    an smp_mb() here which will translate into a run-time synch operation on
    IA.

    Added a longish comment in the code explaining what each barrier in the
    ring implementation accomplishes and what would happen if we removed one of
    them.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
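    A schematic consumer-side dequeue illustrating where the barrier sits; the
    struct layout and names are simplified, not the xsk ring code:

        struct ring {
                __u32 producer;
                __u32 consumer;
                __u32 mask;
                __u64 *desc;
        };

        static bool ring_consume(struct ring *r, __u64 *entry)
        {
                __u32 cons = r->consumer;

                if (cons == r->producer)
                        return false;                   /* ring is empty */

                *entry = r->desc[cons & r->mask];       /* (1) load the data */
                smp_mb();                               /* (2) keep (3) after (1) */
                r->consumer = cons + 1;                 /* (3) hand the slot back */
                return true;
        }

    Without (2), a weakly ordered CPU may make the store (3) visible before the
    load (1) has completed, letting the producer overwrite the slot the
    consumer is still reading.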
* tools/bpftool: show btf_id in map listing (Prashant Bhole, 2019-04-16, 1 file, -0/+6)

    Let's print btf id of map similar to the way we are printing it for
    programs. Sample output:

    user@test# bpftool map -f
    61: lpm_trie  flags 0x1
            key 20B  value 8B  max_entries 1  memlock 4096B
    133: array  name test_btf_id  flags 0x0
            key 4B  value 4B  max_entries 4  memlock 4096B
            pinned /sys/fs/bpf/test100
            btf_id 174
    170: array  name test_btf_id  flags 0x0
            key 4B  value 4B  max_entries 4  memlock 4096B
            btf_id 240

    Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
    Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* tools/bpftool: re-organize newline printing for map listing (Prashant Bhole, 2019-04-16, 1 file, -2/+3)

    Let's move the final newline printing in show_map_close_plain() to the end
    of the function because it looks correct and consistent with prog.c. Also
    let's do related changes for the line which prints the pinned file name.

    Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
    Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpftool: Support sysctl hook (Andrey Ignatov, 2019-04-16, 6 files, -8/+13)

    Add support for the recently added BPF_PROG_TYPE_CGROUP_SYSCTL program type
    and BPF_CGROUP_SYSCTL attach type.

    Example of bpftool output with a sysctl program from selftests:

    # bpftool p load ./test_sysctl_prog.o /mnt/bpf/sysctl_prog type cgroup/sysctl
    # bpftool p l
    9: cgroup_sysctl  name sysctl_tcp_mem  tag 0dd05f81a8d0d52e  gpl
            loaded_at 2019-04-16T12:57:27-0700  uid 0
            xlated 1008B  jited 623B  memlock 4096B

    # bpftool c a /mnt/cgroup2/bla sysctl id 9
    # bpftool c t
    CgroupPath                ID       AttachType      AttachFlags     Name
    /mnt/cgroup2/bla          9        sysctl                          sysctl_tcp_mem

    # bpftool c d /mnt/cgroup2/bla sysctl id 9
    # bpftool c t
    CgroupPath                ID       AttachType      AttachFlags     Name

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* libbpf: fix printf formatter for ptrdiff_t argument (Andrii Nakryiko, 2019-04-16, 1 file, -1/+1)

    Using %ld for printing out a value of ptrdiff_t type is not portable
    between 32-bit and 64-bit archs. This is causing compilation errors for
    libbpf on 32-bit platforms (discovered as part of an effort to integrate
    libbpf into systemd ([0])). The proper formatter is %td, which is used in
    this patch.

    v2->v1:
    - add Reported-by
    - provide more context on how this issue was discovered

    [0] https://github.com/systemd/systemd/pull/12151

    Reported-by: Evgeny Vereshchagin <evvers@ya.ru>
    Cc: Daniel Borkmann <daniel@iogearbox.net>
    Cc: Alexei Starovoitov <ast@fb.com>
    Cc: Yonghong Song <yhs@fb.com>
    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
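    A self-contained illustration of the formatter difference (not libbpf
    code):

        #include <stdio.h>
        #include <stddef.h>

        int main(void)
        {
                char buf[16];
                ptrdiff_t off = &buf[8] - &buf[0];

                /* %td is the portable conversion for ptrdiff_t; %ld only works
                 * where ptrdiff_t happens to be long (typically 64-bit archs). */
                printf("offset = %td\n", off);
                return 0;
        }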
* bpf: use BPF_CAST_CALL for casting bpf call (Prashant Bhole, 2019-04-16, 1 file, -3/+2)

    verifier.c uses BPF_CAST_CALL for casting bpf calls except at one place in
    jit_subprogs(). Let's use the macro for consistency.

    Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: allow clearing all sock_ops callback flags (Viet Hoang Tran, 2019-04-16, 2 files, -3/+9)

    The helper function bpf_sock_ops_cb_flags_set() can be used to both set and
    clear the sock_ops callback flags. However, its current behavior is not
    consistent. A BPF program may clear a flag if more than one were set, or
    replace a flag with another one, but cannot clear all flags.

    This patch also updates the documentation to clarify the ability of this
    helper function to clear flags.

    Signed-off-by: Hoang Tran <hoang.tran@uclouvain.be>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
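    Illustrative sock_ops snippets showing the fixed semantics; skops is the
    program's struct bpf_sock_ops context and the flag names come from the
    UAPI:

        /* set two callback flags */
        bpf_sock_ops_cb_flags_set(skops, BPF_SOCK_OPS_RTO_CB_FLAG |
                                         BPF_SOCK_OPS_RETRANS_CB_FLAG);

        /* replace them with a single flag */
        bpf_sock_ops_cb_flags_set(skops, BPF_SOCK_OPS_RTO_CB_FLAG);

        /* with this patch, passing 0 clears all callback flags */
        bpf_sock_ops_cb_flags_set(skops, 0);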
* selftests: bpf: add VRF test cases to lwt_ip_encap test. (Peter Oskolkov, 2019-04-16, 1 file, -48/+86)

    This patch adds tests validating that VRF and BPF-LWT encap work together
    well, as requested by David Ahern.

    Signed-off-by: Peter Oskolkov <posk@google.com>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: add map helper functions push, pop, peek in more BPF programs (Alban Crequy, 2019-04-16, 3 files, -0/+18)

    commit f1a2e44a3aec ("bpf: add queue and stack maps") introduced new BPF
    helper functions:
    - BPF_FUNC_map_push_elem
    - BPF_FUNC_map_pop_elem
    - BPF_FUNC_map_peek_elem

    but they were made available only for network BPF programs. This patch
    makes them available for tracepoint, cgroup and lirc programs.

    Signed-off-by: Alban Crequy <alban@kinvolk.io>
    Cc: Mauricio Vasquez B <mauricio.vasquez@polito.it>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
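    A sketch of what the change enables: a tracepoint program pushing into a
    BPF_MAP_TYPE_QUEUE (map name, tracepoint and section names are
    illustrative):

        struct bpf_map_def SEC("maps") events = {
                .type        = BPF_MAP_TYPE_QUEUE,
                .key_size    = 0,               /* queue maps have no key */
                .value_size  = sizeof(__u64),
                .max_entries = 1024,
        };

        SEC("tracepoint/syscalls/sys_enter_openat")
        int trace_openat(void *ctx)
        {
                __u64 ts = bpf_ktime_get_ns();

                /* previously rejected for tracepoint programs */
                bpf_map_push_elem(&events, &ts, BPF_ANY);
                return 0;
        }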
* selftests/bpf: make flow dissector tests more extensible (Stanislav Fomichev, 2019-04-16, 1 file, -81/+116)

    Rewrite the selftest to iterate over an array with an input packet and
    expected flow_keys. This should make it easier to extend this test with
    additional cases without too much boilerplate.

    Signed-off-by: Stanislav Fomichev <sdf@google.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
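    A sketch of the table-driven shape described above; the struct fields and
    the run_one() helper are assumptions about the test, not its actual
    definition:

        struct test {
                const char *name;
                struct {
                        struct ethhdr eth;
                        struct iphdr  iph;
                        struct tcphdr tcp;
                } __attribute__((packed)) pkt;          /* input packet */
                struct bpf_flow_keys keys;              /* expected output */
        };

        static struct test tests[] = {
                { .name = "ipv4-tcp", /* packet fields and expected keys ... */ },
        };

        /* adding a case means adding one array entry instead of new code */
        for (i = 0; i < ARRAY_SIZE(tests); i++)
                run_one(&tests[i]);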
* selftests/bpf: two scale tests (Alexei Starovoitov, 2019-04-16, 2 files, -0/+88)

    Add two tests to check that a sequence of 1024 jumps is verifiable.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* bpftool: Improve handling of ENOSPC on reuseport_array map dumps (Benjamin Poirier, 2019-04-16, 1 file, -0/+7)

    Avoid outputting a series of
        value: No space left on device
    lines. The value itself is not wrong but bpf_fd_reuseport_array_lookup_elem()
    can only return it if the map was created with value_size = 8. There's
    nothing bpftool can do about it. Instead of repeating this error for every
    key in the map, print an explanatory warning and a specialized error.

    Example before:

    key: 00 00 00 00  value: No space left on device
    key: 01 00 00 00  value: No space left on device
    key: 02 00 00 00  value: No space left on device
    Found 0 elements

    Example after:

    Warning: cannot read values from reuseport_sockarray map with value_size != 8
    key: 00 00 00 00  value: <cannot read>
    key: 01 00 00 00  value: <cannot read>
    key: 02 00 00 00  value: <cannot read>
    Found 0 elements

    Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
    Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* bpftool: Use print_entry_error() in case of ENOENT when dumping (Benjamin Poirier, 2019-04-16, 1 file, -19/+14)

    Commit bf598a8f0f77 ("bpftool: Improve handling of ENOENT on map dumps")
    used print_entry_plain() in case of ENOENT. However, that commit introduced
    dead code. Per-cpu maps are zero-filled. When reading them, it's all or
    nothing. There will never be a case where some cpus have an entry and
    others don't.

    The truth is that ENOENT is an error case. Use print_entry_error() to
    output the desired message. That function's "value" parameter is also
    renamed to indicate that we never use it for an actual map value. The
    output format is unchanged.

    Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
    Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* tools: bpftool: add a note on program statistics in man page (Quentin Monnet, 2019-04-16, 1 file, -0/+8)

    The Linux kernel now supports statistics for BPF programs, and bpftool is
    able to dump them. However, these statistics are not enabled by default,
    and administrators may not know how to access them.

    Add a paragraph in the bpftool documentation, under the description of the
    "bpftool prog show" command, to explain that such statistics are available
    and that their collection is controlled via a dedicated sysctl knob.

    Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* tools: bpftool: fix short option name for printing version in man pages (Quentin Monnet, 2019-04-16, 7 files, -7/+7)

    The manual pages state that option "-v" (lower case) prints the version
    number for bpftool. This is wrong: the short name of the option is "-V"
    (upper case). Fix the documentation accordingly.

    Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* tools: bpftool: fix man page documentation for "pinmaps" keyword (Quentin Monnet, 2019-04-16, 1 file, -1/+1)

    The "pinmaps" keyword is present in the man page, in the verbose
    description of the "bpftool prog load" command. However, it is missing from
    the summary of available commands at the beginning of the file. Add it
    there as well.

    Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* tools: bpftool: reset errno for "bpftool cgroup tree" (Quentin Monnet, 2019-04-16, 1 file, -0/+7)

    When trying to dump the tree of all cgroups under a given root node,
    bpftool attempts to query programs of all available attach types. Some of
    those attach types do not support queries, therefore several of the calls
    are actually expected to fail.

    Those calls set errno to EINVAL, which has no consequence for dumping the
    rest of the tree. It does have consequences however if errno is inspected
    at a later time. For example, bpftool batch mode relies on errno to
    determine whether a command has succeeded, and whether it should carry on
    with the next command. Setting errno to EINVAL when everything worked as
    expected would therefore make such a command fail:

        # echo 'cgroup tree \n net show' | \
                bpftool batch file -

    To improve this, reset errno when its value is EINVAL after attempting to
    show programs for all existing attach types in do_show_tree_fn().

    Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* tools: bpftool: remove blank line after btf_id when listing programs (Quentin Monnet, 2019-04-16, 1 file, -1/+1)

    Commit 569b0c77735d ("tools/bpftool: show btf id in program information")
    made bpftool print an empty line after each program entry when listing the
    BPF programs loaded on the system (plain output). This is especially
    confusing when some programs have an associated BTF id, and others don't.
    Let's remove the blank line.

    Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* bpf: reserve flags in bpf_skb_net_shrink (Willem de Bruijn, 2019-04-16, 1 file, -0/+3)

    The ENCAP flags in bpf_skb_adjust_room are ignored on decap with
    bpf_skb_net_shrink. Reserve these bits for future use.

    Fixes: 868d523535c2d ("bpf: add bpf_skb_adjust_room encap flags")
    Signed-off-by: Willem de Bruijn <willemb@google.com>
    Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
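    A sketch of the kind of guard this adds; the exact placement inside
    bpf_skb_net_shrink() is not shown here:

        /* No flags are currently supported on the shrink/decap path; reject
         * anything set so the bits stay reserved for future use. */
        if (unlikely(flags))
                return -EINVAL;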
* bpf: fix whitespace for ENCAP_L2 defines in bpf.h (Alan Maguire, 2019-04-16, 2 files, -6/+6)

    Replace the tab after #define with a space, in line with the rest of the
    definitions.

    Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* selftests/bpf: bring back (void *) cast to set_ipv4_csum in test_tc_tunnel (Stanislav Fomichev, 2019-04-16, 1 file, -1/+1)

    It was removed in commit 166b5a7f2ca3 ("selftests_bpf: extend
    test_tc_tunnel for UDP encap") without any explanation. Otherwise I see:

    progs/test_tc_tunnel.c:160:17: warning: taking address of packed member 'ip'
    of class or structure 'v4hdr' may result in an unaligned pointer value
    [-Waddress-of-packed-member]
                    set_ipv4_csum(&h_outer.ip);
                                  ^~~~~~~~~~
    1 warning generated.

    Cc: Alan Maguire <alan.maguire@oracle.com>
    Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
    Fixes: 166b5a7f2ca3 ("selftests_bpf: extend test_tc_tunnel for UDP encap")
    Signed-off-by: Stanislav Fomichev <sdf@google.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
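    The restored cast, as implied by the warning above (one-line illustration):

        set_ipv4_csum((void *)&h_outer.ip);   /* instead of: set_ipv4_csum(&h_outer.ip); */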
* selftests/btf: add VAR and DATASEC case for dedup tests (Andrii Nakryiko, 2019-04-16, 1 file, -0/+49)

    Add a test case verifying that dedup happens (INTs are deduped in this
    case) and that VAR/DATASEC types are not deduped, but have their referenced
    type IDs adjusted correctly.

    Cc: Daniel Borkmann <daniel@iogearbox.net>
    Cc: Yonghong Song <yhs@fb.com>
    Cc: Alexei Starovoitov <ast@fb.com>
    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* btf: add support for VAR and DATASEC in btf_dedup() (Andrii Nakryiko, 2019-04-16, 1 file, -2/+27)

    This patch adds support for VAR and DATASEC in btf_dedup(). VAR/DATASEC are
    never deduplicated, but they need to be processed anyway as the types they
    refer to might need to be remapped due to deduplication and compaction.

    Cc: Daniel Borkmann <daniel@iogearbox.net>
    Cc: Yonghong Song <yhs@fb.com>
    Cc: Alexei Starovoitov <ast@fb.com>
    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* kbuild: handle old pahole more gracefully when generating BTF (Andrii Nakryiko, 2019-04-16, 1 file, -1/+1)

    When CONFIG_DEBUG_INFO_BTF is enabled but the available version of pahole
    is too old to support BTF generation, the build script is supposed to emit
    a warning and proceed with the build. Due to using exit instead of return
    from a Bash function, the existing handling code prematurely exits with
    exit code 0, not completing some of the build steps. This patch fixes the
    issue by returning from the gen_btf() function only.

    Fixes: e83b9f55448a ("kbuild: add ability to generate BTF type info for vmlinux")
    Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
    Cc: Daniel Borkmann <daniel@iogearbox.net>
    Cc: Alexei Starovoitov <ast@fb.com>
    Cc: Yonghong Song <yhs@fb.com>
    Cc: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
* bpf: refactor "check_reg_arg" to eliminate code redundancyJiong Wang2019-04-121-6/+8
| | | | | | | | | | | There are a few "regs[regno]" here are there across "check_reg_arg", this patch factor it out into a simple "reg" pointer. The intention is to simplify code indentation and make the later patches in this set look cleaner. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: factor out reg and stack slot propagation into "propagate_liveness_reg" (Jiong Wang, 2019-04-12, 1 file, -10/+20)

    After the code refactor in previous patches, the propagation logic inside
    the for loop in "propagate_liveness" becomes clear enough to be factored
    out into a common function, "propagate_liveness_reg".

    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: refactor propagate_liveness to eliminate code redundancy (Jiong Wang, 2019-04-12, 1 file, -14/+20)

    Access to reg states was not factored out; the consequence is long code for
    dereferencing them, which made the indentation hard to read. This patch
    factors out this code so the core code in the loop is easier to follow.

    Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
    Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: refactor propagate_liveness to eliminate duplicated for loop (Jiong Wang, 2019-04-12, 1 file, -3/+1)

    Propagation for registers and stack slots was finished in separate for
    loops, while they are perfectly suited to be put into a single loop. This
    also lets them share some common variables in later patches.

    Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf: Fix distinct pointer types warning for ARCH=i386 (Andrey Ignatov, 2019-04-12, 1 file, -1/+1)

    Fix a new warning reported by kbuild for make ARCH=i386:

        In file included from kernel/bpf/cgroup.c:11:0:
        kernel/bpf/cgroup.c: In function '__cgroup_bpf_run_filter_sysctl':
        include/linux/kernel.h:827:29: warning: comparison of distinct pointer types lacks a cast
              (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
                                        ^
        include/linux/kernel.h:841:4: note: in expansion of macro '__typecheck'
            (__typecheck(x, y) && __no_side_effects(x, y))
            ^~~~~~~~~~~
        include/linux/kernel.h:851:24: note: in expansion of macro '__safe_cmp'
          __builtin_choose_expr(__safe_cmp(x, y), \
                                ^~~~~~~~~~
        include/linux/kernel.h:860:19: note: in expansion of macro '__careful_cmp'
         #define min(x, y) __careful_cmp(x, y, <)
                           ^~~~~~~~~~~~~
        >> kernel/bpf/cgroup.c:837:17: note: in expansion of macro 'min'
           ctx.new_len = min(PAGE_SIZE, *pcount);
                         ^~~

    Fixes: 4e63acdff864 ("bpf: Introduce bpf_sysctl_{get,set}_new_value helpers")
    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
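    One common way to silence this class of warning is an explicit min_t()
    type; whether the patch uses exactly this form is not shown here:

        /* PAGE_SIZE is unsigned long while *pcount is size_t on i386, so plain
         * min() trips the type check; min_t() compares in one explicit type. */
        ctx.new_len = min_t(size_t, PAGE_SIZE, *pcount);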
* Merge branch 'bpf-sysctl-hook' (Alexei Starovoitov, 2019-04-12, 19 files, -8/+2697)

    Andrey Ignatov says:

    ====================
    v2->v3:
    - simplify C based selftests by relying on variable offset stack access.

    v1->v2:
    - add fs/proc/proc_sysctl.c maintainers to Cc:.

    The patch set introduces a new BPF hook for sysctl. It adds a new program
    type BPF_PROG_TYPE_CGROUP_SYSCTL and attach type BPF_CGROUP_SYSCTL.

    The BPF_CGROUP_SYSCTL hook is placed before calling into sysctl's
    proc_handler so that accesses (read/write) to sysctl can be controlled for
    a specific cgroup and either allowed or denied, or traced.

    The hook has access to the sysctl name, the current sysctl value and (on
    write only) to the new sysctl value via corresponding helpers. The new
    sysctl value can be overridden by the program. Both the name and the
    values (current/new) are represented as strings, the same way they're
    visible in /proc/sys/. It is up to the program to parse these strings.

    To help with parsing the most common kind of sysctl value, a vector of
    integers, two new helpers are provided: bpf_strtol and bpf_strtoul, with
    semantics similar to the user space strtol(3) and strtoul(3).

    The hook also provides a bpf_sysctl context with two fields:
    * @write indicates whether the sysctl is being read (= 0) or written (= 1);
    * @file_pos is the sysctl file position to read from or write to, and can
      be overridden.

    The hook allows to make better isolation for containerized applications
    that are run as root, so that one container can't change a sysctl and
    affect all other containers on a host, to make changes to allowed sysctls
    in a safer way, and to simplify sysctl tracing for cgroups.

    Patch 1 is preliminary refactoring.
    Patch 2 adds the new program and attach types.
    Patches 3-5 implement helpers to access the sysctl name and value.
    Patch 6 adds the file_pos field to the bpf_sysctl context.
    Patch 7 updates the UAPI in tools.
    Patches 8-9 add support for the new hook to libbpf and a corresponding test.
    Patches 10-14 add selftests for the new hook.
    Patch 15 adds support for new arg types to the verifier: pointer to integer.
    Patch 16 adds bpf_strto{l,ul} helpers to parse integers from sysctl values.
    Patch 17 updates the UAPI in tools.
    Patch 18 updates bpf_helpers.h.
    Patch 19 adds selftests for pointer to integer in the verifier.
    Patches 20-21 add selftests for bpf_strto{l,ul}, including an integration C
    based test for sysctl value parsing.
    ====================

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/bpf: C based test for sysctl and strtoX (Andrey Ignatov, 2019-04-12, 2 files, -1/+126)

    Add a C based test for a few bpf_sysctl_* helpers and bpf_strtoul.

    Make sure that a sysctl can be identified by name and that multiple
    integers can be parsed from the sysctl value with bpf_strtoul.

    net/ipv4/tcp_mem is chosen as the testing sysctl; it contains 3 unsigned
    longs, which are all parsed and compared (val[0] < val[1] < val[2]).

    Example of output:

    # ./test_sysctl
    ...
    Test case: C prog: deny all writes .. [PASS]
    Test case: C prog: deny access by name .. [PASS]
    Test case: C prog: read tcp_mem .. [PASS]
    Summary: 39 PASSED, 0 FAILED

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
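    A sketch of a BPF_PROG_TYPE_CGROUP_SYSCTL program in the spirit of this
    test: read the current value and parse three unsigned longs with
    bpf_strtoul(). Buffer size, error handling and the single-space separator
    are assumptions, not the test's actual code:

        SEC("cgroup/sysctl")
        int sysctl_tcp_mem(struct bpf_sysctl *ctx)
        {
                unsigned long tcp_mem[3] = {};
                char value[64];
                int i, off = 0, ret;

                ret = bpf_sysctl_get_current_value(ctx, value, sizeof(value));
                if (ret < 0)
                        return 0;                       /* reject on error */

                for (i = 0; i < 3; i++) {
                        ret = bpf_strtoul(value + off, sizeof(value) - off, 0,
                                          &tcp_mem[i]);
                        if (ret <= 0)
                                return 0;
                        off += ret + 1;                 /* skip the separator */
                }

                /* allow only if the three values are strictly increasing */
                return tcp_mem[0] < tcp_mem[1] && tcp_mem[1] < tcp_mem[2];
        }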
| * selftests/bpf: Test bpf_strtol and bpf_strtoul helpers (Andrey Ignatov, 2019-04-12, 1 file, -0/+485)

    Test that the bpf_strtol and bpf_strtoul helpers can be used to convert a
    provided buffer to a long or unsigned long correspondingly and return both
    the correct result and the number of consumed bytes, or a proper errno.

    Example of output:

    # ./test_sysctl
    ..
    Test case: bpf_strtoul one number string .. [PASS]
    Test case: bpf_strtoul multi number string .. [PASS]
    Test case: bpf_strtoul buf_len = 0, reject .. [PASS]
    Test case: bpf_strtoul supported base, ok .. [PASS]
    Test case: bpf_strtoul unsupported base, EINVAL .. [PASS]
    Test case: bpf_strtoul buf with spaces only, EINVAL .. [PASS]
    Test case: bpf_strtoul negative number, EINVAL .. [PASS]
    Test case: bpf_strtol negative number, ok .. [PASS]
    Test case: bpf_strtol hex number, ok .. [PASS]
    Test case: bpf_strtol max long .. [PASS]
    Test case: bpf_strtol overflow, ERANGE .. [PASS]
    Summary: 36 PASSED, 0 FAILED

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/bpf: Test ARG_PTR_TO_LONG arg type (Andrey Ignatov, 2019-04-12, 1 file, -0/+160)

    Test that the verifier handles the new argument types properly, including
    uninitialized or partially initialized values, misaligned stack access,
    etc.

    Example of output:

    #456/p ARG_PTR_TO_LONG uninitialized OK
    #457/p ARG_PTR_TO_LONG half-uninitialized OK
    #458/p ARG_PTR_TO_LONG misaligned OK
    #459/p ARG_PTR_TO_LONG size < sizeof(long) OK
    #460/p ARG_PTR_TO_LONG initialized OK

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/bpf: Add sysctl and strtoX helpers to bpf_helpers.h (Andrey Ignatov, 2019-04-12, 1 file, -0/+19)

    Add bpf_sysctl_* and bpf_strtoX helpers to bpf_helpers.h.

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * bpf: Sync bpf.h to tools/ (Andrey Ignatov, 2019-04-12, 1 file, -1/+50)

    Sync bpf_strtoX related bpf UAPI changes to tools/.

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * bpf: Introduce bpf_strtol and bpf_strtoul helpers (Andrey Ignatov, 2019-04-12, 4 files, -1/+187)

    Add bpf_strtol and bpf_strtoul to convert a string to long and unsigned
    long correspondingly. They are similar to user space strtol(3) and
    strtoul(3), with a few changes to the API:

    * instead of a NUL-terminated C string the helpers expect a buffer and a
      buffer length;
    * the resulting long or unsigned long is returned in a separate
      result-argument;
    * the return value is used to indicate success or failure; on success the
      number of consumed bytes is returned, which can be used to identify the
      position to read next if the buffer is expected to contain multiple
      integers;
    * instead of a *base* argument, *flags* is used; it provides the base in
      the 5 LSBs, other bits are reserved for future use;
    * the number of supported bases is limited.

    Documentation for the new helpers is provided in the bpf.h UAPI.

    The helpers are made available to BPF_PROG_TYPE_CGROUP_SYSCTL programs to
    be able to convert string input to e.g. "ulongvec" output. E.g.
    "net/ipv4/tcp_mem" consists of three ulong integers. They can be parsed by
    calling bpf_strtoul three times.

    Implementation notes:

    The implementation includes "../../lib/kstrtox.h" to reuse the integer
    parsing functions. It's done exactly the same way as fs/proc/base.c already
    does.

    Unfortunately the existing kstrtoX functions can't be used directly since
    they fail if any invalid character is present right after the integer in
    the string. The existing simple_strtoX functions can't be used either since
    they're obsolete and don't handle overflow properly.

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
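    A usage sketch matching the semantics described above (buf and buf_len are
    assumed to come from a sysctl value; the single-byte separator is an
    assumption):

        long first;
        unsigned long second;
        int n;

        /* flags = 0: base in the 5 LSBs, 0 asks the helper to detect the base
         * from the prefix */
        n = bpf_strtol(buf, buf_len, 0, &first);
        if (n <= 0)
                return 0;                       /* nothing parsed */

        /* n is the number of consumed bytes: continue parsing after it */
        if (bpf_strtoul(buf + n + 1, buf_len - n - 1, 0, &second) <= 0)
                return 0;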
| * bpf: Introduce ARG_PTR_TO_{INT,LONG} arg types (Andrey Ignatov, 2019-04-12, 2 files, -0/+31)

    Currently the way to pass a result from a BPF helper to a BPF program is to
    provide a memory area defined by a pointer and size: func(void *, size_t).
    It works great for the generic use-case, but for simple types, such as int,
    it's overkill and consumes two arguments when it could use just one.

    Introduce new argument types ARG_PTR_TO_INT and ARG_PTR_TO_LONG to be able
    to pass a result from a helper to a program via a pointer to int or long
    correspondingly: func(int *) or func(long *).

    The new argument types are similar to ARG_PTR_TO_MEM with the following
    differences:
    * they don't require a corresponding ARG_CONST_SIZE argument; predefined
      access sizes are used instead (32 bit for int, 64 bit for long);
    * it's possible to use more than one such argument in a helper;
    * the provided pointers have to be aligned.

    It's easy to introduce similar ARG_PTR_TO_CHAR and ARG_PTR_TO_SHORT
    argument types. It's not done due to lack of a use-case though.

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
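    A sketch of how a helper prototype can use the new types, modeled on the
    bpf_strtol() description above (general shape, not quoted from the tree):

        static const struct bpf_func_proto bpf_strtol_proto = {
                .func      = bpf_strtol,
                .gpl_only  = false,
                .ret_type  = RET_INTEGER,
                .arg1_type = ARG_PTR_TO_MEM,    /* buf                        */
                .arg2_type = ARG_CONST_SIZE,    /* buf_len                    */
                .arg3_type = ARG_ANYTHING,      /* flags                      */
                .arg4_type = ARG_PTR_TO_LONG,   /* long *res, must be aligned */
        };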
| * selftests/bpf: Test file_pos field in bpf_sysctl ctx (Andrey Ignatov, 2019-04-12, 1 file, -0/+64)

    Test access to the file_pos field of the bpf_sysctl context, both read
    (incl. narrow read) and write.

    # ./test_sysctl
    ...
    Test case: ctx:file_pos sysctl:read read ok .. [PASS]
    Test case: ctx:file_pos sysctl:read read ok narrow .. [PASS]
    Test case: ctx:file_pos sysctl:read write ok .. [PASS]
    ...

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
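    A minimal program touching the new field (section name as used by libbpf
    for this program type; the policy itself is illustrative):

        SEC("cgroup/sysctl")
        int sysctl_file_pos(struct bpf_sysctl *ctx)
        {
                if (ctx->write)
                        ctx->file_pos = 0;      /* force writes to start at offset 0 */
                return 1;                       /* allow the access */
        }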