Diffstat (limited to 'Documentation')
-rw-r--r-- | Documentation/bpf/bpf_design_QA.rst | 25
-rw-r--r-- | Documentation/bpf/instruction-set.rst | 120
-rw-r--r-- | Documentation/bpf/kfuncs.rst | 145
-rw-r--r-- | Documentation/bpf/libbpf/libbpf_naming_convention.rst | 6
-rw-r--r-- | Documentation/bpf/map_xskmap.rst | 2
-rw-r--r-- | Documentation/bpf/ringbuf.rst | 4
-rw-r--r-- | Documentation/bpf/verifier.rst | 297
-rw-r--r-- | Documentation/conf.py | 3
-rw-r--r-- | Documentation/netlink/specs/netdev.yaml | 100
9 files changed, 644 insertions, 58 deletions
diff --git a/Documentation/bpf/bpf_design_QA.rst b/Documentation/bpf/bpf_design_QA.rst index cec2371173d7..bfff0e7e37c2 100644 --- a/Documentation/bpf/bpf_design_QA.rst +++ b/Documentation/bpf/bpf_design_QA.rst @@ -208,6 +208,10 @@ data structures and compile with kernel internal headers. Both of these kernel internals are subject to change and can break with newer kernels such that the program needs to be adapted accordingly. +New BPF functionality is generally added through the use of kfuncs instead of +new helpers. Kfuncs are not considered part of the stable API, and have their own +lifecycle expectations as described in :ref:`BPF_kfunc_lifecycle_expectations`. + Q: Are tracepoints part of the stable ABI? ------------------------------------------ A: NO. Tracepoints are tied to internal implementation details hence they are @@ -236,8 +240,8 @@ A: NO. Classic BPF programs are converted into extend BPF instructions. Q: Can BPF call arbitrary kernel functions? ------------------------------------------- -A: NO. BPF programs can only call a set of helper functions which -is defined for every program type. +A: NO. BPF programs can only call specific functions exposed as BPF helpers or +kfuncs. The set of available functions is defined for every program type. Q: Can BPF overwrite arbitrary kernel memory? --------------------------------------------- @@ -263,7 +267,12 @@ Q: New functionality via kernel modules? Q: Can BPF functionality such as new program or map types, new helpers, etc be added out of kernel module code? -A: NO. +A: Yes, through kfuncs and kptrs + +The core BPF functionality such as program types, maps and helpers cannot be +added to by modules. However, modules can expose functionality to BPF programs +by exporting kfuncs (which may return pointers to module-internal data +structures as kptrs). Q: Directly calling kernel function is an ABI? ---------------------------------------------- @@ -278,7 +287,8 @@ kernel functions have already been used by other kernel tcp cc (congestion-control) implementations. If any of these kernel functions has changed, both the in-tree and out-of-tree kernel tcp cc implementations have to be changed. The same goes for the bpf -programs and they have to be adjusted accordingly. +programs and they have to be adjusted accordingly. See +:ref:`BPF_kfunc_lifecycle_expectations` for details. Q: Attaching to arbitrary kernel functions is an ABI? ----------------------------------------------------- @@ -340,6 +350,7 @@ compatibility for these features? A: NO. -Unlike map value types, there are no stability guarantees for this case. The -whole API to work with allocated objects and any support for special fields -inside them is unstable (since it is exposed through kfuncs). +Unlike map value types, the API to work with allocated objects and any support +for special fields inside them is exposed through kfuncs, and thus has the same +lifecycle expectations as the kfuncs themselves. See +:ref:`BPF_kfunc_lifecycle_expectations` for details. diff --git a/Documentation/bpf/instruction-set.rst b/Documentation/bpf/instruction-set.rst index 2d3fe59bd260..af515de5fc38 100644 --- a/Documentation/bpf/instruction-set.rst +++ b/Documentation/bpf/instruction-set.rst @@ -7,6 +7,11 @@ eBPF Instruction Set Specification, v1.0 This document specifies version 1.0 of the eBPF instruction set. +Documentation conventions +========================= + +For brevity, this document uses the type notion "u64", "u32", etc. 
+to mean an unsigned integer whose width is the specified number of bits. Registers and calling convention ================================ @@ -30,20 +35,56 @@ Instruction encoding eBPF has two instruction encodings: * the basic instruction encoding, which uses 64 bits to encode an instruction -* the wide instruction encoding, which appends a second 64-bit immediate value - (imm64) after the basic instruction for a total of 128 bits. +* the wide instruction encoding, which appends a second 64-bit immediate (i.e., + constant) value after the basic instruction for a total of 128 bits. + +The basic instruction encoding is as follows, where MSB and LSB mean the most significant +bits and least significant bits, respectively: + +============= ======= ======= ======= ============ +32 bits (MSB) 16 bits 4 bits 4 bits 8 bits (LSB) +============= ======= ======= ======= ============ +imm offset src_reg dst_reg opcode +============= ======= ======= ======= ============ + +**imm** + signed integer immediate value -The basic instruction encoding looks as follows: +**offset** + signed integer offset used with pointer arithmetic -============= ======= =============== ==================== ============ -32 bits (MSB) 16 bits 4 bits 4 bits 8 bits (LSB) -============= ======= =============== ==================== ============ -immediate offset source register destination register opcode -============= ======= =============== ==================== ============ +**src_reg** + the source register number (0-10), except where otherwise specified + (`64-bit immediate instructions`_ reuse this field for other purposes) + +**dst_reg** + destination register number (0-10) + +**opcode** + operation to perform Note that most instructions do not use all of the fields. Unused fields shall be cleared to zero. +As discussed below in `64-bit immediate instructions`_, a 64-bit immediate +instruction uses a 64-bit immediate value that is constructed as follows. +The 64 bits following the basic instruction contain a pseudo instruction +using the same format but with opcode, dst_reg, src_reg, and offset all set to zero, +and imm containing the high 32 bits of the immediate value. + +================= ================== +64 bits (MSB) 64 bits (LSB) +================= ================== +basic instruction pseudo instruction +================= ================== + +Thus the 64-bit immediate value is constructed as follows: + + imm64 = (next_imm << 32) | imm + +where 'next_imm' refers to the imm value of the pseudo instruction +following the basic instruction. + Instruction classes ------------------- @@ -71,27 +112,32 @@ For arithmetic and jump instructions (``BPF_ALU``, ``BPF_ALU64``, ``BPF_JMP`` an ============== ====== ================= 4 bits (MSB) 1 bit 3 bits (LSB) ============== ====== ================= -operation code source instruction class +code source instruction class ============== ====== ================= -The 4th bit encodes the source operand: +**code** + the operation code, whose meaning varies by instruction class - ====== ===== ======================================== - source value description - ====== ===== ======================================== - BPF_K 0x00 use 32-bit immediate as source operand - BPF_X 0x08 use 'src_reg' register as source operand - ====== ===== ======================================== +**source** + the source operand location, which unless otherwise specified is one of: -The four MSB bits store the operation code. 
+ ====== ===== ============================================== + source value description + ====== ===== ============================================== + BPF_K 0x00 use 32-bit 'imm' value as source operand + BPF_X 0x08 use 'src_reg' register value as source operand + ====== ===== ============================================== +**instruction class** + the instruction class (see `Instruction classes`_) Arithmetic instructions ----------------------- ``BPF_ALU`` uses 32-bit wide operands while ``BPF_ALU64`` uses 64-bit wide operands for otherwise identical operations. -The 'code' field encodes the operation as below: +The 'code' field encodes the operation as below, where 'src' and 'dst' refer +to the values of the source and destination registers, respectively. ======== ===== ========================================================== code value description @@ -121,19 +167,21 @@ the destination register is unchanged whereas for ``BPF_ALU`` the upper ``BPF_ADD | BPF_X | BPF_ALU`` means:: - dst_reg = (u32) dst_reg + (u32) src_reg; + dst = (u32) ((u32) dst + (u32) src) + +where '(u32)' indicates that the upper 32 bits are zeroed. ``BPF_ADD | BPF_X | BPF_ALU64`` means:: - dst_reg = dst_reg + src_reg + dst = dst + src ``BPF_XOR | BPF_K | BPF_ALU`` means:: - dst_reg = (u32) dst_reg ^ (u32) imm32 + dst = (u32) dst ^ (u32) imm32 ``BPF_XOR | BPF_K | BPF_ALU64`` means:: - dst_reg = dst_reg ^ imm32 + dst = dst ^ imm32 Also note that the division and modulo operations are unsigned. Thus, for ``BPF_ALU``, 'imm' is first interpreted as an unsigned 32-bit value, whereas @@ -167,11 +215,11 @@ Examples: ``BPF_ALU | BPF_TO_LE | BPF_END`` with imm = 16 means:: - dst_reg = htole16(dst_reg) + dst = htole16(dst) ``BPF_ALU | BPF_TO_BE | BPF_END`` with imm = 64 means:: - dst_reg = htobe64(dst_reg) + dst = htobe64(dst) Jump instructions ----------------- @@ -246,15 +294,15 @@ instructions that transfer data between a register and memory. ``BPF_MEM | <size> | BPF_STX`` means:: - *(size *) (dst_reg + off) = src_reg + *(size *) (dst + offset) = src ``BPF_MEM | <size> | BPF_ST`` means:: - *(size *) (dst_reg + off) = imm32 + *(size *) (dst + offset) = imm32 ``BPF_MEM | <size> | BPF_LDX`` means:: - dst_reg = *(size *) (src_reg + off) + dst = *(size *) (src + offset) Where size is one of: ``BPF_B``, ``BPF_H``, ``BPF_W``, or ``BPF_DW``. @@ -288,11 +336,11 @@ BPF_XOR 0xa0 atomic xor ``BPF_ATOMIC | BPF_W | BPF_STX`` with 'imm' = BPF_ADD means:: - *(u32 *)(dst_reg + off16) += src_reg + *(u32 *)(dst + offset) += src ``BPF_ATOMIC | BPF_DW | BPF_STX`` with 'imm' = BPF ADD means:: - *(u64 *)(dst_reg + off16) += src_reg + *(u64 *)(dst + offset) += src In addition to the simple atomic operations, there also is a modifier and two complex atomic operations: @@ -307,16 +355,16 @@ BPF_CMPXCHG 0xf0 | BPF_FETCH atomic compare and exchange The ``BPF_FETCH`` modifier is optional for simple atomic operations, and always set for the complex atomic operations. If the ``BPF_FETCH`` flag -is set, then the operation also overwrites ``src_reg`` with the value that +is set, then the operation also overwrites ``src`` with the value that was in memory before it was modified. -The ``BPF_XCHG`` operation atomically exchanges ``src_reg`` with the value -addressed by ``dst_reg + off``. +The ``BPF_XCHG`` operation atomically exchanges ``src`` with the value +addressed by ``dst + offset``. The ``BPF_CMPXCHG`` operation atomically compares the value addressed by -``dst_reg + off`` with ``R0``. 
If they match, the value addressed by -``dst_reg + off`` is replaced with ``src_reg``. In either case, the -value that was at ``dst_reg + off`` before the operation is zero-extended +``dst + offset`` with ``R0``. If they match, the value addressed by +``dst + offset`` is replaced with ``src``. In either case, the +value that was at ``dst + offset`` before the operation is zero-extended and loaded back to ``R0``. 64-bit immediate instructions @@ -329,7 +377,7 @@ There is currently only one such instruction. ``BPF_LD | BPF_DW | BPF_IMM`` means:: - dst_reg = imm64 + dst = imm64 Legacy BPF Packet access instructions diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst index 1a683225d080..ca96ef3f6896 100644 --- a/Documentation/bpf/kfuncs.rst +++ b/Documentation/bpf/kfuncs.rst @@ -13,7 +13,7 @@ BPF Kernel Functions or more commonly known as kfuncs are functions in the Linux kernel which are exposed for use by BPF programs. Unlike normal BPF helpers, kfuncs do not have a stable interface and can change from one kernel release to another. Hence, BPF programs need to be updated in response to changes in the -kernel. +kernel. See :ref:`BPF_kfunc_lifecycle_expectations` for more information. 2. Defining a kfunc =================== @@ -41,7 +41,7 @@ An example is given below:: __diag_ignore_all("-Wmissing-prototypes", "Global kfuncs as their definitions will be in BTF"); - struct task_struct *bpf_find_get_task_by_vpid(pid_t nr) + __bpf_kfunc struct task_struct *bpf_find_get_task_by_vpid(pid_t nr) { return find_get_task_by_vpid(nr); } @@ -66,7 +66,7 @@ kfunc with a __tag, where tag may be one of the supported annotations. This annotation is used to indicate a memory and size pair in the argument list. An example is given below:: - void bpf_memzero(void *mem, int mem__sz) + __bpf_kfunc void bpf_memzero(void *mem, int mem__sz) { ... } @@ -86,7 +86,7 @@ safety of the program. An example is given below:: - void *bpf_obj_new(u32 local_type_id__k, ...) + __bpf_kfunc void *bpf_obj_new(u32 local_type_id__k, ...) { ... } @@ -125,6 +125,20 @@ flags on a set of kfuncs as follows:: This set encodes the BTF ID of each kfunc listed above, and encodes the flags along with it. Ofcourse, it is also allowed to specify no flags. +kfunc definitions should also always be annotated with the ``__bpf_kfunc`` +macro. This prevents issues such as the compiler inlining the kfunc if it's a +static kernel function, or the function being elided in an LTO build as it's +not used in the rest of the kernel. Developers should not manually add +annotations to their kfunc to prevent these issues. If an annotation is +required to prevent such an issue with your kfunc, it is a bug and should be +added to the definition of the macro so that other kfuncs are similarly +protected. An example is given below:: + + __bpf_kfunc struct task_struct *bpf_get_task_pid(s32 pid) + { + ... + } + 2.4.1 KF_ACQUIRE flag --------------------- @@ -224,6 +238,28 @@ single argument which must be a trusted argument or a MEM_RCU pointer. The argument may have reference count of 0 and the kfunc must take this into consideration. +.. _KF_deprecated_flag: + +2.4.9 KF_DEPRECATED flag +------------------------ + +The KF_DEPRECATED flag is used for kfuncs which are scheduled to be +changed or removed in a subsequent kernel release. A kfunc that is +marked with KF_DEPRECATED should also have any relevant information +captured in its kernel doc. 
Such information typically includes the +kfunc's expected remaining lifespan, a recommendation for new +functionality that can replace it if any is available, and possibly a +rationale for why it is being removed. + +Note that while on some occasions, a KF_DEPRECATED kfunc may continue to be +supported and have its KF_DEPRECATED flag removed, it is likely to be far more +difficult to remove a KF_DEPRECATED flag after it's been added than it is to +prevent it from being added in the first place. As described in +:ref:`BPF_kfunc_lifecycle_expectations`, users that rely on specific kfuncs are +encouraged to make their use-cases known as early as possible, and participate +in upstream discussions regarding whether to keep, change, deprecate, or remove +those kfuncs if and when such discussions occur. + 2.5 Registering the kfuncs -------------------------- @@ -290,14 +326,107 @@ In order to accommodate such requirements, the verifier will enforce strict PTR_TO_BTF_ID type matching if two types have the exact same name, with one being suffixed with ``___init``. -3. Core kfuncs +.. _BPF_kfunc_lifecycle_expectations: + +3. kfunc lifecycle expectations +=============================== + +kfuncs provide a kernel <-> kernel API, and thus are not bound by any of the +strict stability restrictions associated with kernel <-> user UAPIs. This means +they can be thought of as similar to EXPORT_SYMBOL_GPL, and can therefore be +modified or removed by a maintainer of the subsystem they're defined in when +it's deemed necessary. + +Like any other change to the kernel, maintainers will not change or remove a +kfunc without having a reasonable justification. Whether or not they'll choose +to change a kfunc will ultimately depend on a variety of factors, such as how +widely used the kfunc is, how long the kfunc has been in the kernel, whether an +alternative kfunc exists, what the norm is in terms of stability for the +subsystem in question, and of course what the technical cost is of continuing +to support the kfunc. + +There are several implications of this: + +a) kfuncs that are widely used or have been in the kernel for a long time will + be more difficult to justify being changed or removed by a maintainer. In + other words, kfuncs that are known to have a lot of users and provide + significant value provide stronger incentives for maintainers to invest the + time and complexity in supporting them. It is therefore important for + developers that are using kfuncs in their BPF programs to communicate and + explain how and why those kfuncs are being used, and to participate in + discussions regarding those kfuncs when they occur upstream. + +b) Unlike regular kernel symbols marked with EXPORT_SYMBOL_GPL, BPF programs + that call kfuncs are generally not part of the kernel tree. This means that + refactoring cannot typically change callers in-place when a kfunc changes, + as is done for e.g. an upstreamed driver being updated in place when a + kernel symbol is changed. + + Unlike with regular kernel symbols, this is expected behavior for BPF + symbols, and out-of-tree BPF programs that use kfuncs should be considered + relevant to discussions and decisions around modifying and removing those + kfuncs. The BPF community will take an active role in participating in + upstream discussions when necessary to ensure that the perspectives of such + users are taken into account. + +c) A kfunc will never have any hard stability guarantees. 
BPF APIs cannot and + will not ever hard-block a change in the kernel purely for stability + reasons. That being said, kfuncs are features that are meant to solve + problems and provide value to users. The decision of whether to change or + remove a kfunc is a multivariate technical decision that is made on a + case-by-case basis, and which is informed by data points such as those + mentioned above. It is expected that a kfunc being removed or changed with + no warning will not be a common occurrence or take place without sound + justification, but it is a possibility that must be accepted if one is to + use kfuncs. + +3.1 kfunc deprecation +--------------------- + +As described above, while sometimes a maintainer may find that a kfunc must be +changed or removed immediately to accommodate some changes in their subsystem, +usually kfuncs will be able to accommodate a longer and more measured +deprecation process. For example, if a new kfunc comes along which provides +superior functionality to an existing kfunc, the existing kfunc may be +deprecated for some period of time to allow users to migrate their BPF programs +to use the new one. Or, if a kfunc has no known users, a decision may be made +to remove the kfunc (without providing an alternative API) after some +deprecation period so as to provide users with a window to notify the kfunc +maintainer if it turns out that the kfunc is actually being used. + +It's expected that the common case will be that kfuncs will go through a +deprecation period rather than being changed or removed without warning. As +described in :ref:`KF_deprecated_flag`, the kfunc framework provides the +KF_DEPRECATED flag to kfunc developers to signal to users that a kfunc has been +deprecated. Once a kfunc has been marked with KF_DEPRECATED, the following +procedure is followed for removal: + +1. Any relevant information for deprecated kfuncs is documented in the kfunc's + kernel docs. This documentation will typically include the kfunc's expected + remaining lifespan, a recommendation for new functionality that can replace + the usage of the deprecated function (or an explanation as to why no such + replacement exists), etc. + +2. The deprecated kfunc is kept in the kernel for some period of time after it + was first marked as deprecated. This time period will be chosen on a + case-by-case basis, and will typically depend on how widespread the use of + the kfunc is, how long it has been in the kernel, and how hard it is to move + to alternatives. This deprecation time period is "best effort", and as + described :ref:`above<BPF_kfunc_lifecycle_expectations>`, circumstances may + sometimes dictate that the kfunc be removed before the full intended + deprecation period has elapsed. + +3. After the deprecation period the kfunc will be removed. At this point, BPF + programs calling the kfunc will be rejected by the verifier. + +4. Core kfuncs ============== The BPF subsystem provides a number of "core" kfuncs that are potentially applicable to a wide variety of different possible use cases and programs. Those kfuncs are documented here. -3.1 struct task_struct * kfuncs +4.1 struct task_struct * kfuncs ------------------------------- There are a number of kfuncs that allow ``struct task_struct *`` objects to be @@ -373,7 +502,7 @@ Here is an example of it being used: return 0; } -3.2 struct cgroup * kfuncs +4.2 struct cgroup * kfuncs -------------------------- ``struct cgroup *`` objects also have acquire and release functions: @@ -488,7 +617,7 @@ the verifier. 
bpf_cgroup_ancestor() can be used as follows: return 0; } -3.3 struct cpumask * kfuncs +4.3 struct cpumask * kfuncs --------------------------- BPF provides a set of kfuncs that can be used to query, allocate, mutate, and diff --git a/Documentation/bpf/libbpf/libbpf_naming_convention.rst b/Documentation/bpf/libbpf/libbpf_naming_convention.rst index c5ac97f3d4c4..b5b41b61b3c0 100644 --- a/Documentation/bpf/libbpf/libbpf_naming_convention.rst +++ b/Documentation/bpf/libbpf/libbpf_naming_convention.rst @@ -83,8 +83,8 @@ This prevents from accidentally exporting a symbol, that is not supposed to be a part of ABI what, in turn, improves both libbpf developer- and user-experiences. -ABI versionning ---------------- +ABI versioning +-------------- To make future ABI extensions possible libbpf ABI is versioned. Versioning is implemented by ``libbpf.map`` version script that is @@ -148,7 +148,7 @@ API documentation convention The libbpf API is documented via comments above definitions in header files. These comments can be rendered by doxygen and sphinx for well organized html output. This section describes the -convention in which these comments should be formated. +convention in which these comments should be formatted. Here is an example from btf.h: diff --git a/Documentation/bpf/map_xskmap.rst b/Documentation/bpf/map_xskmap.rst index 7093b8208451..dc143edd9233 100644 --- a/Documentation/bpf/map_xskmap.rst +++ b/Documentation/bpf/map_xskmap.rst @@ -178,7 +178,7 @@ The following code snippet shows how to update an XSKMAP with an XSK entry. For an example on how create AF_XDP sockets, please see the AF_XDP-example and AF_XDP-forwarding programs in the `bpf-examples`_ directory in the `libxdp`_ repository. -For a detailed explaination of the AF_XDP interface please see: +For a detailed explanation of the AF_XDP interface please see: - `libxdp-readme`_. - `AF_XDP`_ kernel documentation. diff --git a/Documentation/bpf/ringbuf.rst b/Documentation/bpf/ringbuf.rst index 6a615cd62bda..a99cd05d79d4 100644 --- a/Documentation/bpf/ringbuf.rst +++ b/Documentation/bpf/ringbuf.rst @@ -124,7 +124,7 @@ buffer. Currently 4 are supported: - ``BPF_RB_AVAIL_DATA`` returns amount of unconsumed data in ring buffer; - ``BPF_RB_RING_SIZE`` returns the size of ring buffer; -- ``BPF_RB_CONS_POS``/``BPF_RB_PROD_POS`` returns current logical possition +- ``BPF_RB_CONS_POS``/``BPF_RB_PROD_POS`` returns current logical position of consumer/producer, respectively. Returned values are momentarily snapshots of ring buffer state and could be @@ -146,7 +146,7 @@ Design and Implementation This reserve/commit schema allows a natural way for multiple producers, either on different CPUs or even on the same CPU/in the same BPF program, to reserve independent records and work with them without blocking other producers. This -means that if BPF program was interruped by another BPF program sharing the +means that if BPF program was interrupted by another BPF program sharing the same ring buffer, they will both get a record reserved (provided there is enough space left) and can work with it and submit it independently. This applies to NMI context as well, except that due to using a spinlock during diff --git a/Documentation/bpf/verifier.rst b/Documentation/bpf/verifier.rst index d4326caf01f9..f0ec19db301c 100644 --- a/Documentation/bpf/verifier.rst +++ b/Documentation/bpf/verifier.rst @@ -192,7 +192,7 @@ checked and found to be non-NULL, all copies can become PTR_TO_MAP_VALUEs. 
As well as range-checking, the tracked information is also used for enforcing alignment of pointer accesses. For instance, on most systems the packet pointer is 2 bytes after a 4-byte alignment. If a program adds 14 bytes to that to jump -over the Ethernet header, then reads IHL and addes (IHL * 4), the resulting +over the Ethernet header, then reads IHL and adds (IHL * 4), the resulting pointer will have a variable offset known to be 4n+2 for some n, so adding the 2 bytes (NET_IP_ALIGN) gives a 4-byte alignment and so word-sized accesses through that pointer are safe. @@ -316,6 +316,301 @@ Pruning considers not only the registers but also the stack (and any spilled registers it may hold). They must all be safe for the branch to be pruned. This is implemented in states_equal(). +Some technical details about state pruning implementation could be found below. + +Register liveness tracking +-------------------------- + +In order to make state pruning effective, liveness state is tracked for each +register and stack slot. The basic idea is to track which registers and stack +slots are actually used during subseqeuent execution of the program, until +program exit is reached. Registers and stack slots that were never used could be +removed from the cached state thus making more states equivalent to a cached +state. This could be illustrated by the following program:: + + 0: call bpf_get_prandom_u32() + 1: r1 = 0 + 2: if r0 == 0 goto +1 + 3: r0 = 1 + --- checkpoint --- + 4: r0 = r1 + 5: exit + +Suppose that a state cache entry is created at instruction #4 (such entries are +also called "checkpoints" in the text below). The verifier could reach the +instruction with one of two possible register states: + +* r0 = 1, r1 = 0 +* r0 = 0, r1 = 0 + +However, only the value of register ``r1`` is important to successfully finish +verification. The goal of the liveness tracking algorithm is to spot this fact +and figure out that both states are actually equivalent. + +Data structures +~~~~~~~~~~~~~~~ + +Liveness is tracked using the following data structures:: + + enum bpf_reg_liveness { + REG_LIVE_NONE = 0, + REG_LIVE_READ32 = 0x1, + REG_LIVE_READ64 = 0x2, + REG_LIVE_READ = REG_LIVE_READ32 | REG_LIVE_READ64, + REG_LIVE_WRITTEN = 0x4, + REG_LIVE_DONE = 0x8, + }; + + struct bpf_reg_state { + ... + struct bpf_reg_state *parent; + ... + enum bpf_reg_liveness live; + ... + }; + + struct bpf_stack_state { + struct bpf_reg_state spilled_ptr; + ... + }; + + struct bpf_func_state { + struct bpf_reg_state regs[MAX_BPF_REG]; + ... + struct bpf_stack_state *stack; + } + + struct bpf_verifier_state { + struct bpf_func_state *frame[MAX_CALL_FRAMES]; + struct bpf_verifier_state *parent; + ... + } + +* ``REG_LIVE_NONE`` is an initial value assigned to ``->live`` fields upon new + verifier state creation; + +* ``REG_LIVE_WRITTEN`` means that the value of the register (or stack slot) is + defined by some instruction verified between this verifier state's parent and + verifier state itself; + +* ``REG_LIVE_READ{32,64}`` means that the value of the register (or stack slot) + is read by a some child state of this verifier state; + +* ``REG_LIVE_DONE`` is a marker used by ``clean_verifier_state()`` to avoid + processing same verifier state multiple times and for some sanity checks; + +* ``->live`` field values are formed by combining ``enum bpf_reg_liveness`` + values using bitwise or. 
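As a minimal, editor-added illustration (not part of the patch itself; both helpers below are hypothetical and only restate the semantics of the flags defined above, which the verifier checks inline)::

    /* Hypothetical helpers, for illustration only.  ->live is a bitwise-or
     * combination, so a stack slot that is redefined in the current state and
     * later read by a child state ends up with a value such as
     * REG_LIVE_WRITTEN | REG_LIVE_READ64.
     */
    static bool reg_read_by_children(const struct bpf_reg_state *reg)
    {
            return reg->live & REG_LIVE_READ;      /* READ32 or READ64 */
    }

    static bool reg_screens_off_parent(const struct bpf_reg_state *reg)
    {
            return reg->live & REG_LIVE_WRITTEN;   /* defined since the parent state */
    }

The pseudocode for ``mark_reg_read()`` in a later subsection shows how the read
bits are propagated up the parentage chain until such a write mark is hit.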
+ +Register parentage chains +~~~~~~~~~~~~~~~~~~~~~~~~~ + +In order to propagate information between parent and child states, a *register +parentage chain* is established. Each register or stack slot is linked to a +corresponding register or stack slot in its parent state via a ``->parent`` +pointer. This link is established upon state creation in ``is_state_visited()`` +and might be modified by ``set_callee_state()`` called from +``__check_func_call()``. + +The rules for correspondence between registers / stack slots are as follows: + +* For the current stack frame, registers and stack slots of the new state are + linked to the registers and stack slots of the parent state with the same + indices. + +* For the outer stack frames, only caller saved registers (r6-r9) and stack + slots are linked to the registers and stack slots of the parent state with the + same indices. + +* When function call is processed a new ``struct bpf_func_state`` instance is + allocated, it encapsulates a new set of registers and stack slots. For this + new frame, parent links for r6-r9 and stack slots are set to nil, parent links + for r1-r5 are set to match caller r1-r5 parent links. + +This could be illustrated by the following diagram (arrows stand for +``->parent`` pointers):: + + ... ; Frame #0, some instructions + --- checkpoint #0 --- + 1 : r6 = 42 ; Frame #0 + --- checkpoint #1 --- + 2 : call foo() ; Frame #0 + ... ; Frame #1, instructions from foo() + --- checkpoint #2 --- + ... ; Frame #1, instructions from foo() + --- checkpoint #3 --- + exit ; Frame #1, return from foo() + 3 : r1 = r6 ; Frame #0 <- current state + + +-------------------------------+-------------------------------+ + | Frame #0 | Frame #1 | + Checkpoint +-------------------------------+-------------------------------+ + #0 | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+ + ^ ^ ^ ^ + | | | | + Checkpoint +-------------------------------+ + #1 | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+ + ^ ^ ^ + |_______|_______|_______________ + | | | + nil nil | | | nil nil + | | | | | | | + Checkpoint +-------------------------------+-------------------------------+ + #2 | r0 | r1-r5 | r6-r9 | fp-8 ... | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+-------------------------------+ + ^ ^ ^ ^ ^ + nil nil | | | | | + | | | | | | | + Checkpoint +-------------------------------+-------------------------------+ + #3 | r0 | r1-r5 | r6-r9 | fp-8 ... | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+-------------------------------+ + ^ ^ + nil nil | | + | | | | + Current +-------------------------------+ + state | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+ + \ + r6 read mark is propagated via these links + all the way up to checkpoint #1. + The checkpoint #1 contains a write mark for r6 + because of instruction (1), thus read propagation + does not reach checkpoint #0 (see section below). + +Liveness marks tracking +~~~~~~~~~~~~~~~~~~~~~~~ + +For each processed instruction, the verifier tracks read and written registers +and stack slots. The main idea of the algorithm is that read marks propagate +back along the state parentage chain until they hit a write mark, which 'screens +off' earlier states from the read. 
The information about reads is propagated by +function ``mark_reg_read()`` which could be summarized as follows:: + + mark_reg_read(struct bpf_reg_state *state, ...): + parent = state->parent + while parent: + if state->live & REG_LIVE_WRITTEN: + break + if parent->live & REG_LIVE_READ64: + break + parent->live |= REG_LIVE_READ64 + state = parent + parent = state->parent + +Notes: + +* The read marks are applied to the **parent** state while write marks are + applied to the **current** state. The write mark on a register or stack slot + means that it is updated by some instruction in the straight-line code leading + from the parent state to the current state. + +* Details about REG_LIVE_READ32 are omitted. + +* Function ``propagate_liveness()`` (see section :ref:`read_marks_for_cache_hits`) + might override the first parent link. Please refer to the comments in the + ``propagate_liveness()`` and ``mark_reg_read()`` source code for further + details. + +Because stack writes could have different sizes ``REG_LIVE_WRITTEN`` marks are +applied conservatively: stack slots are marked as written only if write size +corresponds to the size of the register, e.g. see function ``save_register_state()``. + +Consider the following example:: + + 0: (*u64)(r10 - 8) = 0 ; define 8 bytes of fp-8 + --- checkpoint #0 --- + 1: (*u32)(r10 - 8) = 1 ; redefine lower 4 bytes + 2: r1 = (*u32)(r10 - 8) ; read lower 4 bytes defined at (1) + 3: r2 = (*u32)(r10 - 4) ; read upper 4 bytes defined at (0) + +As stated above, the write at (1) does not count as ``REG_LIVE_WRITTEN``. Should +it be otherwise, the algorithm above wouldn't be able to propagate the read mark +from (3) to checkpoint #0. + +Once the ``BPF_EXIT`` instruction is reached ``update_branch_counts()`` is +called to update the ``->branches`` counter for each verifier state in a chain +of parent verifier states. When the ``->branches`` counter reaches zero the +verifier state becomes a valid entry in a set of cached verifier states. + +Each entry of the verifier states cache is post-processed by a function +``clean_live_states()``. This function marks all registers and stack slots +without ``REG_LIVE_READ{32,64}`` marks as ``NOT_INIT`` or ``STACK_INVALID``. +Registers/stack slots marked in this way are ignored in function ``stacksafe()`` +called from ``states_equal()`` when a state cache entry is considered for +equivalence with a current state. + +Now it is possible to explain how the example from the beginning of the section +works:: + + 0: call bpf_get_prandom_u32() + 1: r1 = 0 + 2: if r0 == 0 goto +1 + 3: r0 = 1 + --- checkpoint[0] --- + 4: r0 = r1 + 5: exit + +* At instruction #2 branching point is reached and state ``{ r0 == 0, r1 == 0, pc == 4 }`` + is pushed to states processing queue (pc stands for program counter). + +* At instruction #4: + + * ``checkpoint[0]`` states cache entry is created: ``{ r0 == 1, r1 == 0, pc == 4 }``; + * ``checkpoint[0].r0`` is marked as written; + * ``checkpoint[0].r1`` is marked as read; + +* At instruction #5 exit is reached and ``checkpoint[0]`` can now be processed + by ``clean_live_states()``. After this processing ``checkpoint[0].r0`` has a + read mark and all other registers and stack slots are marked as ``NOT_INIT`` + or ``STACK_INVALID`` + +* The state ``{ r0 == 0, r1 == 0, pc == 4 }`` is popped from the states queue + and is compared against a cached state ``{ r1 == 0, pc == 4 }``, the states + are considered equivalent. + +.. 
_read_marks_for_cache_hits: + +Read marks propagation for cache hits +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Another point is the handling of read marks when a previously verified state is +found in the states cache. Upon cache hit verifier must behave in the same way +as if the current state was verified to the program exit. This means that all +read marks, present on registers and stack slots of the cached state, must be +propagated over the parentage chain of the current state. Example below shows +why this is important. Function ``propagate_liveness()`` handles this case. + +Consider the following state parentage chain (S is a starting state, A-E are +derived states, -> arrows show which state is derived from which):: + + r1 read + <------------- A[r1] == 0 + C[r1] == 0 + S ---> A ---> B ---> exit E[r1] == 1 + | + ` ---> C ---> D + | + ` ---> E ^ + |___ suppose all these + ^ states are at insn #Y + | + suppose all these + states are at insn #X + +* Chain of states ``S -> A -> B -> exit`` is verified first. + +* While ``B -> exit`` is verified, register ``r1`` is read and this read mark is + propagated up to state ``A``. + +* When chain of states ``C -> D`` is verified the state ``D`` turns out to be + equivalent to state ``B``. + +* The read mark for ``r1`` has to be propagated to state ``C``, otherwise state + ``C`` might get mistakenly marked as equivalent to state ``E`` even though + values for register ``r1`` differ between ``C`` and ``E``. + Understanding eBPF verifier messages ==================================== diff --git a/Documentation/conf.py b/Documentation/conf.py index d927737e3c10..8b4e5451a02d 100644 --- a/Documentation/conf.py +++ b/Documentation/conf.py @@ -116,6 +116,9 @@ if major >= 3: # include/linux/linkage.h: "asmlinkage", + + # include/linux/btf.h + "__bpf_kfunc", ] else: diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml new file mode 100644 index 000000000000..b4dcdae54ffd --- /dev/null +++ b/Documentation/netlink/specs/netdev.yaml @@ -0,0 +1,100 @@ +name: netdev + +doc: + netdev configuration over generic netlink. + +definitions: + - + type: flags + name: xdp-act + entries: + - + name: basic + doc: + XDP feautues set supported by all drivers + (XDP_ABORTED, XDP_DROP, XDP_PASS, XDP_TX) + - + name: redirect + doc: + The netdev supports XDP_REDIRECT + - + name: ndo-xmit + doc: + This feature informs if netdev implements ndo_xdp_xmit callback. + - + name: xsk-zerocopy + doc: + This feature informs if netdev supports AF_XDP in zero copy mode. + - + name: hw-offload + doc: + This feature informs if netdev supports XDP hw oflloading. + - + name: rx-sg + doc: + This feature informs if netdev implements non-linear XDP buffer + support in the driver napi callback. + - + name: ndo-xmit-sg + doc: + This feature informs if netdev implements non-linear XDP buffer + support in ndo_xdp_xmit callback. + +attribute-sets: + - + name: dev + attributes: + - + name: ifindex + doc: netdev ifindex + type: u32 + value: 1 + checks: + min: 1 + - + name: pad + type: pad + - + name: xdp-features + doc: Bitmask of enabled xdp-features. + type: u64 + enum: xdp-act + enum-as-flags: true + +operations: + list: + - + name: dev-get + doc: Get / dump information about a netdev. + value: 1 + attribute-set: dev + do: + request: + attributes: + - ifindex + reply: &dev-all + attributes: + - ifindex + - xdp-features + dump: + reply: *dev-all + - + name: dev-add-ntf + doc: Notification about device appearing. 
+ notify: dev-get + mcgrp: mgmt + - + name: dev-del-ntf + doc: Notification about device disappearing. + notify: dev-get + mcgrp: mgmt + - + name: dev-change-ntf + doc: Notification about device configuration being changed. + notify: dev-get + mcgrp: mgmt + +mcast-groups: + list: + - + name: mgmt
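As a closing, editor-added sketch (not part of the patch): because ``xdp-act`` is
declared with ``type: flags``, each entry above occupies the next bit in
declaration order, and a driver advertises its support by or-ing those bits
together. The ``NETDEV_XDP_ACT_*`` constants and the ``xdp_set_features_flag()``
helper used below are assumed to be the C counterparts generated and added
alongside this spec, not something defined in this document::

    /* Sketch under the assumptions stated above: a driver that handles the
     * basic XDP actions, XDP_REDIRECT and ndo_xdp_xmit() would advertise
     * exactly those three bits.  User space can then read the mask back via
     * the dev-get operation's xdp-features attribute.
     */
    #include <linux/netdevice.h>
    #include <net/xdp.h>

    static void example_drv_set_xdp_features(struct net_device *dev)
    {
            xdp_features_t feats = NETDEV_XDP_ACT_BASIC |
                                   NETDEV_XDP_ACT_REDIRECT |
                                   NETDEV_XDP_ACT_NDO_XMIT;

            xdp_set_features_flag(dev, feats);
    }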