* hsr: use netdev_err() instead of WARN_ONCE()  (Taehee Yoo, 2020-02-29, 1 file, -1/+2)

    When the HSR interface sends a frame, it looks up the node matching the destination Ethernet address in its node list. If no node is found, it calls WARN_ONCE(). Using WARN_ONCE() for this situation is excessive, so this patch uses netdev_err() instead.

    Signed-off-by: Taehee Yoo <ap420073@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

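    For illustration only, a minimal sketch of the swap (the helper and message below are hypothetical, not the hsr code): WARN_ONCE() emits a one-time backtrace splat, while netdev_err() logs a plain per-device error.

        /* Hypothetical helper; needs <linux/netdevice.h>. */
        static void report_unknown_node(struct net_device *dev,
                                        const unsigned char *addr)
        {
                /* Previously: WARN_ONCE(1, "node not found\n"); */
                netdev_err(dev, "Could not find node for address %pM\n", addr);
        }
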
* hsr: use extack error message instead of netdev_info  (Taehee Yoo, 2020-02-29, 5 files, -21/+35)

    If HSR reports errors through extack instead of netdev_info(), users get the error message directly from the failing netlink operation without having to check the kernel log.

    Signed-off-by: Taehee Yoo <ap420073@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

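    A minimal sketch of the extack pattern (the helper, condition, and message text are illustrative, not the patch's exact code):

        /* NL_SET_ERR_MSG_MOD() comes from <linux/netlink.h> and prefixes the module name. */
        static int hsr_check_slave(struct net_device *dev,
                                   struct netlink_ext_ack *extack)
        {
                if (!dev) {
                        NL_SET_ERR_MSG_MOD(extack, "Slave device not found");
                        return -ENODEV;
                }
                return 0;
        }
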
* hsr: use debugfs_remove_recursive() instead of debugfs_remove()  (Taehee Yoo, 2020-02-29, 2 files, -5/+1)

    If debugfs_remove_recursive() is used instead of debugfs_remove(), hsr_priv no longer needs to keep the "node_tbl_file" pointer variable.

    Signed-off-by: Taehee Yoo <ap420073@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

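    A sketch of the general debugfs pattern (directory and file names here are illustrative): with recursive removal, only the directory dentry has to be stored.

        #include <linux/debugfs.h>

        static struct dentry *hsr_dbg_dir;

        static void hsr_debugfs_init_example(void *priv, const struct file_operations *fops)
        {
                hsr_dbg_dir = debugfs_create_dir("hsr_example", NULL);
                /* The file dentry no longer needs to be remembered anywhere. */
                debugfs_create_file("node_table", 0444, hsr_dbg_dir, priv, fops);
        }

        static void hsr_debugfs_term_example(void)
        {
                debugfs_remove_recursive(hsr_dbg_dir);  /* removes the dir and everything below it */
        }
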
* net: ag71xx: port to phylink  (Oleksij Rempel, 2020-02-29, 2 files, -72/+118)

    The port to phylink was done as closely as possible to the original functionality.

    Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
    Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* Merge branch 'net-ll_temac-RX-TX-ring-size-and-coalesce-ethtool-parameters'  (David S. Miller, 2020-02-29, 2 files, -52/+154)

    Esben Haabendal says:

    ====================
    net: ll_temac: RX/TX ring size and coalesce ethtool parameters

    This series adds support for RX/TX ring size and irq coalesce ethtool parameters to the ll_temac driver.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

| * net: ll_temac: Add ethtool support for coalesce parameters  (Esben Haabendal, 2020-02-29, 2 files, -22/+81)

    Please note that the delays are calculated based on typical parameters. But as TEMAC is an HDL IP, designs may vary, and future work might be needed to make this calculation configurable.

    Signed-off-by: Esben Haabendal <esben@geanix.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

| * net: ll_temac: Make RX/TX ring sizes configurable  (Esben Haabendal, 2020-02-29, 2 files, -26/+72)

    Add support for setting the RX and TX ring sizes for this driver using ethtool. Also increase the default RX ring size, as the previous default was far too low for good performance in some configurations.

    Signed-off-by: Esben Haabendal <esben@geanix.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

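    As a rough sketch of how a driver exposes ring sizes through ethtool (struct, field, and macro names below are placeholders, not necessarily the ll_temac ones):

        #define EXAMPLE_RX_BD_MAX 4096
        #define EXAMPLE_TX_BD_MAX 4096

        struct example_priv { u32 rx_bd_num, tx_bd_num; };

        static void example_get_ringparam(struct net_device *ndev,
                                          struct ethtool_ringparam *ering)
        {
                struct example_priv *lp = netdev_priv(ndev);

                ering->rx_max_pending = EXAMPLE_RX_BD_MAX;
                ering->tx_max_pending = EXAMPLE_TX_BD_MAX;
                ering->rx_pending = lp->rx_bd_num;
                ering->tx_pending = lp->tx_bd_num;
        }

        /* Wired up via .get_ringparam (and a matching .set_ringparam) in the
         * driver's ethtool_ops. */
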
| * net: ll_temac: Remove unused start_p variable  (Esben Haabendal, 2020-02-29, 1 file, -2/+1)

    The start_p variable was included in the initial commit, commit 92744989533c ("net: add Xilinx ll_temac device driver"), but has never had any real use.

    Signed-off-by: Esben Haabendal <esben@geanix.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

| * net: ll_temac: Remove unused tx_bd_next struct field  (Esben Haabendal, 2020-02-29, 2 files, -2/+0)

    The tx_bd_next field was included in the initial commit, commit 92744989533c ("net: add Xilinx ll_temac device driver"), but has never had any real use.

    Signed-off-by: Esben Haabendal <esben@geanix.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* net: sched: Replace zero-length array with flexible-array member  (Gustavo A. R. Silva, 2020-02-29, 5 files, -5/+5)

    The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99:

        struct foo {
            int stuff;
            struct boo array[];
        };

    By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on.

    Also, notice that dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1]

    This issue was found with the help of Coccinelle.

    [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
    [2] https://github.com/KSPP/linux/issues/21
    [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

    Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

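    A self-contained illustration of the conversion (the struct names are made up; these are not the net/sched structures being patched):

        struct boo { int x; };

        /* Before: GNU zero-length array, a pre-C99 extension. */
        struct foo_old {
                int stuff;
                struct boo array[0];
        };

        /* After: C99 flexible array member; must be the last member. */
        struct foo_new {
                int stuff;
                struct boo array[];
        };

        /* Allocation is unchanged, since sizeof(struct foo_new) excludes array[]:
         *   p = kmalloc(struct_size(p, array, n), GFP_KERNEL);
         */
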
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller, 2020-02-29, 49 files, -499/+1383)

    Alexei Starovoitov says:

    ====================
    pull-request: bpf-next 2020-02-28

    The following pull-request contains BPF updates for your *net-next* tree.

    We've added 41 non-merge commits during the last 7 day(s) which contain a total of 49 files changed, 1383 insertions(+), 499 deletions(-).

    The main changes are:

    1) BPF and Real-Time nicely co-exist.

    2) bpftool feature improvements.

    3) retrieve bpf_sk_storage via INET_DIAG.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

| * Merge branch 'bpf_sk_storage_via_inet_diag'  (Alexei Starovoitov, 2020-02-27, 14 files, -185/+599)

    Martin KaFai Lau says:

    ====================
    The bpf_prog can store specific info to a sk by using bpf_sk_storage. In other words, a sk can be extended by a bpf_prog. This series is to support providing bpf_sk_storage data during inet_diag's dump. The primary target is the usage like iproute2's "ss".

    The first two patches are refactoring works in inet_diag to make adding bpf_sk_storage support easier. The next two patches do the actual work.

    Please see individual patch for details.

    v2:
    - Add commit message for u16 to u32 change in min_dump_alloc in Patch 4 (Song)
    - Add comment to explain the !skb->len check in __inet_diag_dump in Patch 4.
    - Do the map->map_type check earlier in Patch 3 for readability.
    ====================

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

| | * bpf: inet_diag: Dump bpf_sk_storages in inet_diag_dump()  (Martin KaFai Lau, 2020-02-27, 4 files, -2/+82)

    This patch will dump out the bpf_sk_storages of a sk if the request has the INET_DIAG_REQ_SK_BPF_STORAGES nlattr.

    An array of SK_DIAG_BPF_STORAGE_REQ_MAP_FD can be specified in INET_DIAG_REQ_SK_BPF_STORAGES to select which bpf_sk_storage to dump. If no map_fd is specified, all bpf_sk_storages of a sk will be dumped.

    bpf_sk_storages can be added to the system at runtime. It is difficult to find a proper static value for cb->min_dump_alloc.

    This patch learns the nlattr size required to dump the bpf_sk_storages of a sk. If it happens to be the very first nlmsg of a dump and it cannot fit the needed bpf_sk_storages, it will try to expand the skb by "pskb_expand_head()".

    Instead of expanding it in inet_sk_diag_fill(), it is expanded at a sleepable context in __inet_diag_dump() so __GFP_DIRECT_RECLAIM can be used. In __inet_diag_dump(), it will retry as long as the skb is empty and the cb->min_dump_alloc becomes larger than before. cb->min_dump_alloc is bounded by KMALLOC_MAX_SIZE. The min_dump_alloc is also changed from 'u16' to 'u32' to accommodate a sk that may have a few large bpf_sk_storages.

    The updated cb->min_dump_alloc will also be used to allocate the skb in the next dump. This logic already exists in netlink_dump().

    Here is the sample output of a locally modified 'ss' and it could be made more readable by using BTF later:

        [root@arch-fb-vm1 ~]# ss --bpf-map-id 14 --bpf-map-id 13 -t6an 'dst [::1]:8989'
        State Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
        ESTAB 0      0      [::1]:51072         [::1]:8989
              bpf_map_id:14 value:[ 3feb ]
              bpf_map_id:13 value:[ 3f ]
        ESTAB 0      0      [::1]:51070         [::1]:8989
              bpf_map_id:14 value:[ 3feb ]
              bpf_map_id:13 value:[ 3f ]

        [root@arch-fb-vm1 ~]# ~/devshare/github/iproute2/misc/ss --bpf-maps -t6an 'dst [::1]:8989'
        State Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
        ESTAB 0      0      [::1]:51072         [::1]:8989
              bpf_map_id:14 value:[ 3feb ]
              bpf_map_id:13 value:[ 3f ]
              bpf_map_id:12 value:[ 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000... total:65407 ]
        ESTAB 0      0      [::1]:51070         [::1]:8989
              bpf_map_id:14 value:[ 3feb ]
              bpf_map_id:13 value:[ 3f ]
              bpf_map_id:12 value:[ 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000... total:65407 ]

    Signed-off-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20200225230427.1976129-1-kafai@fb.com

| | * bpf: INET_DIAG support in bpf_sk_storage  (Martin KaFai Lau, 2020-02-27, 5 files, -6/+346)

    This patch adds INET_DIAG support to bpf_sk_storage.

    1. Although this series adds bpf_sk_storage diag capability to inet sk, bpf_sk_storage is in general applicable to all fullsock. Hence, the bpf_sk_storage logic will operate on SK_DIAG_* nlattr. The caller will pass in its specific nesting nlattr (e.g. INET_DIAG_*) as the argument.

    2. The request will be like:

           INET_DIAG_REQ_SK_BPF_STORAGES (nla_nest) (defined in latter patch)
               SK_DIAG_BPF_STORAGE_REQ_MAP_FD (nla_put_u32)
               SK_DIAG_BPF_STORAGE_REQ_MAP_FD (nla_put_u32)
               ......

       Considering there could be multiple bpf_sk_storages in a sk, instead of reusing INET_DIAG_INFO ("ss -i"), the user can select some specific bpf_sk_storage to dump by specifying an array of SK_DIAG_BPF_STORAGE_REQ_MAP_FD.

       If no SK_DIAG_BPF_STORAGE_REQ_MAP_FD is specified (i.e. an empty INET_DIAG_REQ_SK_BPF_STORAGES), it will dump all bpf_sk_storages of a sk.

    3. The reply will be like:

           INET_DIAG_BPF_SK_STORAGES (nla_nest) (defined in latter patch)
               SK_DIAG_BPF_STORAGE (nla_nest)
                   SK_DIAG_BPF_STORAGE_MAP_ID (nla_put_u32)
                   SK_DIAG_BPF_STORAGE_MAP_VALUE (nla_reserve_64bit)
               SK_DIAG_BPF_STORAGE (nla_nest)
                   SK_DIAG_BPF_STORAGE_MAP_ID (nla_put_u32)
                   SK_DIAG_BPF_STORAGE_MAP_VALUE (nla_reserve_64bit)
               ......

    4. Unlike other INET_DIAG info of a sk which is pretty static, the size required to dump the bpf_sk_storage(s) of a sk is dynamic as the system adds more bpf_sk_storage_map. It is hard to set a static min_dump_alloc size.

       Hence, this series learns it at runtime and adjusts the cb->min_dump_alloc as it iterates all sk(s) of a system. The "unsigned int *res_diag_size" in bpf_sk_storage_diag_put() is for this purpose. The next patch will update the cb->min_dump_alloc as it iterates the sk(s).

    Signed-off-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20200225230421.1975729-1-kafai@fb.com

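    For illustration, a userspace sketch of building the request described in (2) with libmnl. The attribute names come from this series; the surrounding nlmsg setup and header locations are assumptions, not the iproute2 implementation.

        #include <libmnl/libmnl.h>

        /* Append the nested bpf_sk_storage request to an inet_diag nlmsg. */
        static void put_sk_storage_req(struct nlmsghdr *nlh,
                                       const int *map_fds, int nr_fds)
        {
                struct nlattr *nest;
                int i;

                nest = mnl_attr_nest_start(nlh, INET_DIAG_REQ_SK_BPF_STORAGES);
                for (i = 0; i < nr_fds; i++)
                        mnl_attr_put_u32(nlh, SK_DIAG_BPF_STORAGE_REQ_MAP_FD,
                                         map_fds[i]);
                mnl_attr_nest_end(nlh, nest);
        }
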
| | * inet_diag: Move the INET_DIAG_REQ_BYTECODE nlattr to cb->data  (Martin KaFai Lau, 2020-02-27, 8 files, -64/+98)

    The INET_DIAG_REQ_BYTECODE nlattr is currently re-found every time when the "dump()" is re-started.

    In a latter patch, it will also need to parse the new INET_DIAG_REQ_SK_BPF_STORAGES nlattr to learn the map_fds. Thus, this patch takes this chance to store the parsed nlattr in cb->data during the "start" time of a dump. By doing this, the "bc" argument also becomes unnecessary and is removed. Also, the two copies of the INET_DIAG_REQ_BYTECODE parsing-audit logic between compat/current version can be consolidated to one.

    Signed-off-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20200225230415.1975555-1-kafai@fb.com

| | * inet_diag: Refactor inet_sk_diag_fill(), dump(), and dump_one()  (Martin KaFai Lau, 2020-02-27, 7 files, -113/+73)

    In a latter patch, there is a need to update "cb->min_dump_alloc" in inet_sk_diag_fill() as it learns the different bpf_sk_storages stored in a sk while dumping all sk(s) (e.g. tcp_hashinfo). The inet_sk_diag_fill() currently does not take the "cb" as an argument. One of the reasons is that inet_sk_diag_fill() is used by both dump_one() and dump() (which belong to the "struct inet_diag_handler"). The dump_one() interface does not pass the "cb" along.

    This patch is to make dump_one() pass a "cb". The "cb" is created in inet_diag_cmd_exact(). The "nlh" and "in_skb" are stored in "cb" as the dump() interface does. The total number of args in inet_sk_diag_fill() is also cut from 10 to 7, and that helps many callers to pass fewer args. In particular, "struct user_namespace *user_ns", "u32 pid", and "u32 seq" can be replaced by accessing "cb->nlh" and "cb->skb".

    A similar argument reduction is also made to inet_twsk_diag_fill() and inet_req_diag_fill().

    inet_csk_diag_dump() and inet_csk_diag_fill() are also removed. They are mostly equivalent to inet_sk_diag_fill(). Their repeated usages are very limited. Thus, inet_sk_diag_fill() is directly used in those occasions.

    Signed-off-by: Martin KaFai Lau <kafai@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20200225230409.1975173-1-kafai@fb.com

| * bpf: Replace zero-length array with flexible-array member  (Gustavo A. R. Silva, 2020-02-28, 6 files, -6/+6)

    The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99:

        struct foo {
            int stuff;
            struct boo array[];
        };

    By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on.

    Also, notice that dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1]

    This issue was found with the help of Coccinelle.

    [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
    [2] https://github.com/KSPP/linux/issues/21
    [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

    Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20200227001744.GA3317@embeddedor

| * Merge branch 'bpf-bpftool-probes'  (Daniel Borkmann, 2020-02-26, 7 files, -123/+373)

    Michal Rostecki says:

    ====================
    Feature probes in bpftool related to bpf_probe_write_user and bpf_trace_printk helpers emit dmesg warnings which might be confusing for people running bpftool on production environments. This patch series addresses that by filtering them out by default and introducing the new positional argument "full" which enables all available probes.

    The main motivation behind those changes is the fact that some probes (for example those related to "trace" or "write_user" helpers) emit dmesg messages which might be confusing for people who are running on production environments. For details see the Cilium issue[0].

    v1 -> v2:
    - Do not expose regex filters to users, keep filtering logic internal, expose only the "full" option for including probes which emit dmesg warnings.

    v2 -> v3:
    - Do not use regex for filtering out probes, use function IDs directly.
    - Fix bash completion - in v2 only "prefix" was proposed after "macros", "dev" and "kernel" were not.
    - Rephrase the man page paragraph, highlight helper function names.
    - Remove tests which parse the plain output of bpftool (except the header/macros test), focus on testing JSON output instead.
    - Add test which compares the output with and without "full" option.

    v3 -> v4:
    - Use enum to check for helper functions.
    - Make selftests compatible with older versions of Python 3.x than 3.7.

    [0] https://github.com/cilium/cilium/issues/10048
    ====================

    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

| | * selftests/bpf: Add test for "bpftool feature" command  (Michal Rostecki, 2020-02-26, 4 files, -2/+189)

    Add a Python module with tests for the "bpftool feature" command, which mainly checks whether the "full" option is working properly.

    Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20200226165941.6379-6-mrostecki@opensuse.org

| | * bpftool: Update bash completion for "bpftool feature" command  (Michal Rostecki, 2020-02-26, 1 file, -1/+2)

    Update bash completion for the "bpftool feature" command with the new argument: "full".

    Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20200226165941.6379-5-mrostecki@opensuse.org

| | * bpftool: Update documentation of "bpftool feature" command  (Michal Rostecki, 2020-02-26, 1 file, -9/+10)

    Update documentation of the "bpftool feature" command with information about the new argument: "full".

    Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20200226165941.6379-4-mrostecki@opensuse.org

| | * bpftool: Make probes which emit dmesg warnings optional  (Michal Rostecki, 2020-02-26, 1 file, -21/+49)

    Probes related to bpf_probe_write_user and bpf_trace_printk helpers emit dmesg warnings which might be confusing for people running bpftool on production environments. This change filters them out by default and introduces the new positional argument "full" which enables all available probes.

    Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20200226165941.6379-3-mrostecki@opensuse.org

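    A sketch of the filtering idea using the helper IDs from the UAPI header (the function name and surrounding structure are illustrative, not bpftool's exact code):

        #include <stdbool.h>
        #include <linux/bpf.h>

        /* Helpers whose feature probes write warnings to dmesg. */
        static bool probe_emits_dmesg_warning(enum bpf_func_id id)
        {
                switch (id) {
                case BPF_FUNC_trace_printk:
                case BPF_FUNC_probe_write_user:
                        return true;
                default:
                        return false;
                }
        }

        /* Such probes would then be skipped unless the user passes "full". */
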
| | * bpftool: Move out sections to separate functions  (Michal Rostecki, 2020-02-26, 1 file, -93/+126)

    Move all calls of the print_end_then_start_section function, and the for loops, out of the do_probe function. Instead, provide separate functions for each section (e.g. section_helpers) which are called from do_probe. This change is motivated by better readability.

    Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20200226165941.6379-2-mrostecki@opensuse.org

| * bpftool: Support struct_ops, tracing, ext prog types  (Andrey Ignatov, 2020-02-26, 4 files, -4/+9)

    Add support for prog types that were added to the kernel but not present in bpftool yet: struct_ops, tracing, ext prog types and corresponding section names.

    Before:

        # bpftool p l
        ...
        184: type 26  name test_subprog3  tag dda135a7dc0daf54  gpl
                loaded_at 2020-02-25T13:28:33-0800  uid 0
                xlated 112B  jited 103B  memlock 4096B  map_ids 136
                btf_id 85
        185: type 28  name new_get_skb_len  tag d2de5b87d8e5dc49  gpl
                loaded_at 2020-02-25T13:28:33-0800  uid 0
                xlated 72B  jited 69B  memlock 4096B  map_ids 136
                btf_id 85

    After:

        # bpftool p l
        ...
        184: tracing  name test_subprog3  tag dda135a7dc0daf54  gpl
                loaded_at 2020-02-25T13:28:33-0800  uid 0
                xlated 112B  jited 103B  memlock 4096B  map_ids 136
                btf_id 85
        185: ext  name new_get_skb_len  tag d2de5b87d8e5dc49  gpl
                loaded_at 2020-02-25T13:28:33-0800  uid 0
                xlated 72B  jited 69B  memlock 4096B  map_ids 136
                btf_id 85

    Signed-off-by: Andrey Ignatov <rdna@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20200225223441.689109-1-rdna@fb.com

| * scripts/bpf: Switch to more portable python3 shebang  (Scott Branden, 2020-02-26, 1 file, -1/+1)

    Change "/usr/bin/python3" to "/usr/bin/env python3" for a more portable solution in bpf_helpers_doc.py.

    Signed-off-by: Scott Branden <scott.branden@broadcom.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20200225205426.6975-1-scott.branden@broadcom.com

| * virtio_net: Add XDP meta data support  (Yuya Kusakabe, 2020-02-25, 1 file, -20/+32)

    Implement support for transferring XDP meta data into the skb for the virtio_net driver. Before calling into the program, xdp.data_meta is set to point to xdp.data; on program return with a pass verdict, we call into skb_metadata_set().

    Tested with the script at https://github.com/higebu/virtio_net-xdp-metadata-test.

    Signed-off-by: Yuya Kusakabe <yuya.kusakabe@gmail.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Jason Wang <jasowang@redhat.com>
    Acked-by: Michael S. Tsirkin <mst@redhat.com>
    Link: https://lore.kernel.org/bpf/20200225033212.437563-2-yuya.kusakabe@gmail.com

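    A condensed sketch of the driver-side pattern (simplified and with an invented function name, not the exact virtio_net receive path):

        static u32 rx_run_xdp_with_meta(struct bpf_prog *prog,
                                        struct xdp_buff *xdp,
                                        struct sk_buff *skb)
        {
                unsigned int metasize;
                u32 act;

                xdp->data_meta = xdp->data;     /* program may grow metadata downwards */
                act = bpf_prog_run_xdp(prog, xdp);
                metasize = xdp->data - xdp->data_meta;

                if (act == XDP_PASS && metasize)
                        skb_metadata_set(skb, metasize);
                return act;
        }
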
| * virtio_net: Keep vnet header zeroed if XDP is loaded for small buffer  (Yuya Kusakabe, 2020-02-25, 1 file, -2/+2)

    We do not want to care about the vnet header in receive_small() if XDP is loaded, since we can not know whether or not the packet is modified by XDP.

    Fixes: f6b10209b90d ("virtio-net: switch to use build_skb() for small buffer")
    Signed-off-by: Yuya Kusakabe <yuya.kusakabe@gmail.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Jason Wang <jasowang@redhat.com>
    Acked-by: Michael S. Tsirkin <mst@redhat.com>
    Link: https://lore.kernel.org/bpf/20200225033212.437563-1-yuya.kusakabe@gmail.com

| * selftests/bpf: Print backtrace on SIGSEGV in test_progs  (Andrii Nakryiko, 2020-02-25, 2 files, -1/+26)

    Due to various bugs (usually in tests' clean-up code), if the host system is misconfigured it happens that test_progs will just crash in the middle of running a test with little to no indication of where and why the crash happened. For cases where a coredump is not readily available (e.g., inside a CI), it's very helpful to have the stack trace which led to the crash printed out.

    This change adds a signal handler that will capture and print out a symbolized backtrace:

        $ sudo ./test_progs -t mmap
        test_mmap:PASS:skel_open_and_load 0 nsec
        test_mmap:PASS:bss_mmap 0 nsec
        test_mmap:PASS:data_mmap 0 nsec
        Caught signal #11!
        Stack trace:
        ./test_progs(crash_handler+0x18)[0x42a888]
        /lib64/libpthread.so.0(+0xf5d0)[0x7f2aab5175d0]
        ./test_progs(test_mmap+0x3c0)[0x41f0a0]
        ./test_progs(main+0x160)[0x407d10]
        /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f2aab15d3d5]
        ./test_progs[0x407ebc]
        [1] 1988412 segmentation fault (core dumped)  sudo ./test_progs -t mmap

    Unfortunately, glibc's symbolization support is unable to symbolize static functions, only global ones will be present in the stack trace. But it's still a step forward without adding extra libraries to get a better symbolization.

    Signed-off-by: Andrii Nakryiko <andriin@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20200225000847.3965188-1-andriin@fb.com

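    A minimal, self-contained sketch of the glibc-based approach (details differ from the actual test_progs handler, and fprintf in a signal handler is acceptable here only because the process is about to die anyway):

        #include <execinfo.h>
        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        static void crash_handler(int signum)
        {
                void *bt[64];
                int sz = backtrace(bt, 64);

                fprintf(stderr, "Caught signal #%d!\nStack trace:\n", signum);
                backtrace_symbols_fd(bt, sz, STDERR_FILENO);
                _exit(128 + signum);
        }

        int main(void)
        {
                signal(SIGSEGV, crash_handler);
                /* ... run the tests ... */
                return 0;
        }
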
| * selftests/bpf: Run SYN cookies with reuseport BPF test only for TCP  (Jakub Sitnicki, 2020-02-24, 1 file, -8/+9)

    Currently we run the SYN cookies test for all socket types and mark the test as skipped if the socket type is not compatible. This causes confusion because a skipped test might indicate a problem with the testing environment. Instead, run the test only for the socket type which supports SYN cookies.

    Also, switch to using designated initializers when setting up tests, so that we can tweak only some test parameters, leaving the rest initialized to default values.

    Fixes: eecd618b4516 ("selftests/bpf: Mark SYN cookie test skipped for UDP sockets")
    Reported-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20200224135327.121542-2-jakub@cloudflare.com

| * selftests/bpf: Run reuseport tests only with supported socket types  (Jakub Sitnicki, 2020-02-24, 1 file, -7/+6)

    SOCKMAP and SOCKHASH map types can be used with reuseport BPF programs but don't yet support storing UDP sockets. Instead of marking UDP tests with SOCK{MAP,HASH} as skipped, don't run them at all.

    A skipped test might signal that the test environment is not suitable for running the test, while in reality the functionality is not implemented in the kernel yet.

    Before:

        sh# ./test_progs -t select_reuseport
        …
        #40 select_reuseport:OK
        Summary: 1/126 PASSED, 30 SKIPPED, 0 FAILED

    After:

        sh# ./test_progs -t select_reuseport
        …
        #40 select_reuseport:OK
        Summary: 1/98 PASSED, 2 SKIPPED, 0 FAILED

    The remaining two skipped tests are SYN cookies tests, which will be addressed in the subsequent patch.

    Fixes: 11318ba8cafd ("selftests/bpf: Extend SK_REUSEPORT tests to cover SOCKMAP/SOCKHASH")
    Reported-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20200224135327.121542-1-jakub@cloudflare.com

| * Merge branch 'BPF_and_RT'  (Alexei Starovoitov, 2020-02-24, 18 files, -142/+283)

    Thomas Gleixner says:

    ====================
    This is the third version of the BPF/RT patch set which makes both coexist nicely.

    The long explanation can be found in the cover letter of the V1 submission:

        https://lore.kernel.org/r/20200214133917.304937432@linutronix.de

    V2 is here:

        https://lore.kernel.org/r/20200220204517.863202864@linutronix.de

    The following changes vs. V2 have been made:

    - Rebased to bpf-next, adjusted to the lock changes in the hashmap code.

    - Split the preallocation enforcement patch for instrumentation type BPF programs into two pieces:

      1) Emit a one-time warning on !RT kernels when any instrumentation type BPF program uses run-time allocation. Emit also a corresponding warning in the verifier log. But allow the program to run for backward compatibility sake. After a grace period this should be enforced.

      2) On RT reject such programs because on RT the memory allocator cannot be called from truly atomic contexts.

    - Fixed the fallout from V2 as reported by Alexei and 0-day

    - Removed the redundant preempt_disable() from trace_call_bpf()

    - Removed the unused export of trace_call_bpf()
    ====================

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

| | * bpf/stackmap: Don't trylock mmap_sem with PREEMPT_RT and interrupts disabled  (David Miller, 2020-02-24, 1 file, -3/+15)

    In an RT kernel, down_read_trylock() cannot be used from NMI context and up_read_non_owner() is another problematic issue. So in such a configuration, simply elide the annotated stackmap and just report the raw IPs.

    In the longer term, it might be possible to provide atomic-friendly versions of the page cache traversal which will at least provide the info if the pages are resident and don't need to be paged in.

    [ tglx: Use IS_ENABLED() to avoid the #ifdeffery, fixup the irq work callback and add a comment ]

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145644.708960317@linutronix.de

| | * bpf, lpm: Make locking RT friendly  (Thomas Gleixner, 2020-02-24, 1 file, -6/+6)

    The LPM trie map cannot be used in contexts like perf, kprobes and tracing as this map type dynamically allocates memory. The memory allocation happens with a raw spinlock held, which is a truly spinning lock on a PREEMPT_RT enabled kernel and disables preemption and interrupts.

    As RT does not allow memory allocation from such a section for various reasons, convert the raw spinlock to a regular spinlock.

    On an RT enabled kernel these locks are substituted by 'sleeping' spinlocks which provide the proper protection but keep the code preemptible. On a non-RT kernel regular spinlocks map to raw spinlocks, i.e. this does not cause any functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145644.602129531@linutronix.de

| | * bpf: Prepare hashtab locking for PREEMPT_RT  (Thomas Gleixner, 2020-02-24, 1 file, -9/+56)

    PREEMPT_RT forbids certain operations like memory allocations (even with GFP_ATOMIC) from atomic contexts. This is required because even with GFP_ATOMIC the memory allocator calls into code paths which acquire locks with long held lock sections. To ensure the deterministic behaviour these locks are regular spinlocks, which are converted to 'sleepable' spinlocks on RT.

    The only true atomic contexts on an RT kernel are the low level hardware handling, scheduling, low level interrupt handling, NMIs etc. None of these contexts should ever do memory allocations. As regular device interrupt handlers and soft interrupts are forced into thread context, the existing code which does

        spin_lock*(); alloc(GFP_ATOMIC); spin_unlock*();

    just works.

    In theory the BPF locks could be converted to regular spinlocks as well, but the bucket locks and percpu_freelist locks can be taken from arbitrary contexts (perf, kprobes, tracepoints) which are required to be atomic contexts even on RT. These mechanisms require preallocated maps, so there is no need to invoke memory allocations within the lock held sections.

    BPF maps which need dynamic allocation are only used from (forced) thread context on RT and can therefore use regular spinlocks which in turn allows to invoke memory allocations from the lock held section.

    To achieve this, make the hash bucket lock a union of a raw and a regular spinlock and initialize and lock/unlock either the raw spinlock for preallocated maps or the regular variant for maps which require memory allocations.

    On a non-RT kernel this distinction is neither possible nor required. spinlock maps to raw_spinlock and the extra code and conditional is optimized out by the compiler. No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145644.509685912@linutronix.de

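    A rough sketch of the construct described above (struct, field, and helper names are illustrative, not the exact hashtab code):

        #include <linux/spinlock.h>
        #include <linux/list_nulls.h>

        struct example_bucket {
                struct hlist_nulls_head head;
                union {
                        raw_spinlock_t raw_lock;   /* preallocated maps */
                        spinlock_t     lock;       /* maps that allocate at runtime */
                };
        };

        static inline unsigned long example_lock_bucket(struct example_bucket *b,
                                                        bool prealloc)
        {
                unsigned long flags;

                if (prealloc)
                        raw_spin_lock_irqsave(&b->raw_lock, flags);
                else
                        spin_lock_irqsave(&b->lock, flags);
                return flags;   /* the matching unlock helper takes flags back */
        }
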
| | * bpf: Factor out hashtab bucket lock operations  (Thomas Gleixner, 2020-02-24, 1 file, -23/+46)

    As a preparation for making the BPF locking RT friendly, factor out the hash bucket lock operations into inline functions. This allows to do the necessary RT modification in one place instead of sprinkling it all over the place. No functional change.

    The now unused htab argument of the lock/unlock functions will be used in the next step which adds PREEMPT_RT support.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145644.420416916@linutronix.de

| | * bpf: Replace open coded recursion prevention in sys_bpf()  (Thomas Gleixner, 2020-02-24, 1 file, -19/+8)

    The required protection is that the caller cannot be migrated to a different CPU as these functions end up in places which take either a hash bucket lock or might trigger a kprobe inside the memory allocator. Both scenarios can lead to deadlocks. The deadlock prevention is per CPU by incrementing a per CPU variable which temporarily blocks the invocation of BPF programs from perf and kprobes.

    Replace the open coded preempt_[dis|en]able and __this_cpu_[inc|dec] pairs with the new helper functions. These functions are already prepared to make BPF work on PREEMPT_RT enabled kernels. No functional change for !RT kernels.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145644.317843926@linutronix.de

| | * bpf: Use recursion prevention helpers in hashtab code  (Thomas Gleixner, 2020-02-24, 1 file, -8/+4)

    The required protection is that the caller cannot be migrated to a different CPU as these places take either a hash bucket lock or might trigger a kprobe inside the memory allocator. Both scenarios can lead to deadlocks. The deadlock prevention is per CPU by incrementing a per CPU variable which temporarily blocks the invocation of BPF programs from perf and kprobes.

    Replace the open coded preempt_disable/enable() and this_cpu_inc/dec() pairs with the new recursion prevention helpers to prepare BPF to work on PREEMPT_RT enabled kernels. On a non-RT kernel the migrate disable/enable in the helpers map to preempt_disable/enable(), i.e. no functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145644.211208533@linutronix.de

| | * bpf: Provide recursion prevention helpers  (Thomas Gleixner, 2020-02-24, 1 file, -0/+30)

    The places which need to prevent the execution of trace type BPF programs to prevent deadlocks on the hash bucket lock do this open coded.

    Provide two inline functions, bpf_disable/enable_instrumentation(), to replace these open coded protection constructs.

    Use migrate_disable/enable() instead of preempt_disable/enable() right away so this works on RT enabled kernels. On a !RT kernel migrate_disable/enable() are mapped to preempt_disable/enable().

    These helpers use this_cpu_inc/dec() instead of __this_cpu_inc/dec() on an RT enabled kernel because migrate disabled regions are preemptible and preemption might hit in the middle of a RMW operation which can lead to inconsistent state.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145644.103910133@linutronix.de

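    Based on the description above, the helpers look roughly like this (a sketch; the in-tree version additionally distinguishes RT and !RT kernels when choosing between this_cpu and __this_cpu operations):

        DECLARE_PER_CPU(int, bpf_prog_active);

        static inline void bpf_disable_instrumentation(void)
        {
                migrate_disable();
                this_cpu_inc(bpf_prog_active);
        }

        static inline void bpf_enable_instrumentation(void)
        {
                this_cpu_dec(bpf_prog_active);
                migrate_enable();
        }
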
| | * bpf: Use migrate_disable/enable in array macros and cgroup/lirc code  (David Miller, 2020-02-24, 2 files, -6/+7)

    Replace the preemption disable/enable with migrate_disable/enable() to reflect the actual requirement and to allow PREEMPT_RT to substitute it with an actual migration disable mechanism which does not disable preemption. Including the code paths that go via __bpf_prog_run_save_cb().

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.998293311@linutronix.de

| | * bpf: Use migrate_disable/enable() in trampoline code  (David Miller, 2020-02-24, 1 file, -4/+5)

    Use migrate_disable/enable() instead of preemption disable/enable to reflect the purpose. This allows PREEMPT_RT to substitute it with an actual migration disable implementation. On non-RT kernels this is still mapped to preempt_disable/enable().

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.891428873@linutronix.de

| | * bpf/tests: Use migrate disable instead of preempt disable  (David Miller, 2020-02-24, 2 files, -6/+6)

    Replace the preemption disable/enable with migrate_disable/enable() to reflect the actual requirement and to allow PREEMPT_RT to substitute it with an actual migration disable mechanism which does not disable preemption.

    [ tglx: Switched it over to migrate disable ]

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.785306549@linutronix.de

| | * bpf: Use bpf_prog_run_pin_on_cpu() at simple call sites  (David Miller, 2020-02-24, 5 files, -18/+6)

    All of these cases are strictly of the form:

        preempt_disable();
        BPF_PROG_RUN(...);
        preempt_enable();

    Replace this with bpf_prog_run_pin_on_cpu() which wraps BPF_PROG_RUN() with:

        migrate_disable();
        BPF_PROG_RUN(...);
        migrate_enable();

    On non-RT enabled kernels this maps to preempt_disable/enable() and on RT enabled kernels this solely prevents migration, which is sufficient as there is no requirement to prevent reentrancy to any BPF program from a preempting task. The only requirement is that the program stays on the same CPU.

    Therefore, this is a trivially correct transformation.

    The seccomp loop does not need protection over the loop. It only needs protection per BPF filter program.

    [ tglx: Converted to bpf_prog_run_pin_on_cpu() ]

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.691493094@linutronix.de

| | * bpf: Replace cant_sleep() with cant_migrate()  (Thomas Gleixner, 2020-02-24, 1 file, -1/+1)

    As already discussed in the previous change which introduced BPF_RUN_PROG_PIN_ON_CPU(), BPF only requires to disable migration to guarantee per CPUness.

    If RT substitutes the preempt disable based migration protection then the cant_sleep() check will obviously trigger as preemption is not disabled.

    Replace it by cant_migrate() which maps to cant_sleep() on a non-RT kernel and will verify that migration is disabled on a full RT kernel.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.583038889@linutronix.de

| | * bpf: Provide bpf_prog_run_pin_on_cpu() helper  (Thomas Gleixner, 2020-02-24, 1 file, -2/+24)

    BPF programs need to run on one CPU to completion as they use per CPU storage, but according to Alexei they don't need reentrancy protection as obviously BPF programs running in thread context can always be 'preempted' by hard and soft interrupts and instrumentation, and the same program can run concurrently on a different CPU.

    The currently used mechanism to ensure CPUness is to wrap the invocation into a preempt_disable/enable() pair. Disabling preemption is also disabling migration for a task. preempt_disable/enable() is used because there is no explicit way to reliably disable only migration.

    Provide a separate macro to invoke a BPF program which can be used in migratable task context. It wraps BPF_PROG_RUN() in a migrate_disable/enable() pair which maps on non-RT enabled kernels to preempt_disable/enable(). On RT enabled kernels this merely disables migration. Both methods ensure that the invoked BPF program runs on one CPU to completion.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.474592620@linutronix.de

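    The helper's shape, per the changelog (a sketch that lives alongside BPF_PROG_RUN(); the in-tree definition may differ in detail):

        static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
                                                  const void *ctx)
        {
                u32 ret;

                migrate_disable();      /* preempt_disable() on !RT kernels */
                ret = BPF_PROG_RUN(prog, ctx);
                migrate_enable();
                return ret;
        }
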
| | * bpf: Don't iterate over possible CPUs with interrupts disabled  (Thomas Gleixner, 2020-02-24, 1 file, -10/+10)

    pcpu_freelist_populate() is disabling interrupts and then iterates over the possible CPUs. The reason why this disables interrupts is to silence lockdep because the invoked ___pcpu_freelist_push() takes spin locks.

    Neither the interrupt disabling nor the locking are required in this function because it's called during initialization and the resulting map is not yet visible to anything.

    Split out the actual push assignment into an inline, call it from the loop and remove the interrupt disable.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.365930116@linutronix.de

| | * bpf: Remove recursion prevention from rcu free callback  (Thomas Gleixner, 2020-02-24, 1 file, -8/+0)

    If an element is freed via RCU then recursion into BPF instrumentation functions is not a concern. The element is already detached from the map and the RCU callback does not hold any locks on which a kprobe, perf event or tracepoint attached BPF program could deadlock.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.259118710@linutronix.de

| | * perf/bpf: Remove preempt disable around BPF invocation  (Thomas Gleixner, 2020-02-24, 1 file, -2/+0)

    The BPF invocation from the perf event overflow handler does not require disabling preemption because it is called from NMI or at least hard interrupt context, which is already non-preemptible.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.151953573@linutronix.de

| | * bpf/trace: Remove redundant preempt_disable from trace_call_bpf()  (Thomas Gleixner, 2020-02-24, 1 file, -2/+1)

    Similar to __bpf_trace_run, this is redundant because __bpf_trace_run() is invoked from a trace point via __DO_TRACE() which already disables preemption _before_ invoking any of the functions which are attached to a trace point.

    Remove it and add a cant_sleep() check.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20200224145643.059995527@linutronix.de

| | * bpf: disable preemption for bpf progs attached to uprobe  (Alexei Starovoitov, 2020-02-24, 1 file, -2/+9)

    trace_call_bpf() no longer disables preemption on its own. All callers of this function have to do it explicitly.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>

| | * bpf/trace: Remove EXPORT from trace_call_bpf()  (Thomas Gleixner, 2020-02-24, 1 file, -1/+0)

    All callers are built in. No point to export this.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>