author:    Andrii Nakryiko <andrii@kernel.org>  2023-05-04 21:33:14 -0700
committer: Alexei Starovoitov <ast@kernel.org>  2023-05-04 22:35:35 -0700
commit:    c50c0b57a515826b5d2e1ce85cd85f24f0da10c2 (patch)
tree:      4a381a53814e9cf96470affc3b3e3933f0490c28
parent:    f655badf2a8fc028433d9583bf86a6b473721f09 (diff)
bpf: fix mark_all_scalars_precise use in mark_chain_precision
When precision backtracking bails out due to an unsupported sequence of
instructions (e.g., a stack access through a register other than r10), we
need to mark all SCALAR registers as precise to be safe. Currently,
though, we mark SCALARs precise only starting from the state in which we
detected the unsupported condition, which could be one of the parent
states of the actual current state. This can leave some registers
unmarked even though they should be marked precise. So make sure we
start marking scalars as precise from the current state
(env->cur_state).
Further, we don't currently detect the situation where we end up with
some stack slots marked as needing precision, but we have run out of
available states in which to find the instructions that populate those
stack slots. This is akin to the `i >= func->allocated_stack /
BPF_REG_SIZE` check and should be handled similarly, by falling back to
marking all SCALARs precise. Add this check for when we run out of
states.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230505043317.3629845-8-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>