author     Martin KaFai Lau <kafai@fb.com>                  2021-11-05 18:40:14 -0700
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2022-07-12 16:34:54 +0200
commit     d53c8fe9ee2919a1b82c91b46ec74bfd9caa4a64 (patch)
tree       8104692ffdc31b8bdd41fb8f43de9142c29b9f51 /include
parent     d3688bfa5af4557e617f5da2373ccf32c999311e (diff)
bpf: Stop caching subprog index in the bpf_pseudo_func insn
commit 3990ed4c426652fcd469f8c9dc08156294b36c28 upstream.

This patch fixes an out-of-bounds access when JITing the bpf_pseudo_func
insn (i.e. ld_imm64 with src_reg == BPF_PSEUDO_FUNC).

In jit_subprog(), the subprog index cached in insn[1].imm is currently
reused. This subprog index is an index into a few arrays related to
subprogs. For example, in jit_subprog() it indexes the newly allocated
'struct bpf_prog **func' array.

The subprog index was cached in insn[1].imm after add_subprog(). However,
it can become outdated (and too big in this case) if some subprogs are
completely removed during dead code elimination (in
adjust_subprog_starts_after_remove). The cached index in insn[1].imm is
not updated accordingly, causing an out-of-bounds access later in
jit_subprog().

Unlike the bpf_pseudo_'func' insn, the current bpf_pseudo_'call' insn
handles the DCE properly by calling find_subprog(insn->imm) to figure out
the index instead of caching the subprog index. The existing
bpf_adj_branches() adjusts insn->imm whenever an insn is added or removed.

Instead of having two ways of handling the subprog index, this patch makes
bpf_pseudo_func work more like bpf_pseudo_call.

The first change is to stop caching the subprog index result in
insn[1].imm after add_subprog(). The verification process uses
find_subprog(insn->imm) to figure out the subprog index.

The second change is in bpf_adj_branches(): have it adjust insn->imm for
the bpf_pseudo_func insn as well whenever an insn is added or removed.

The third change is in jit_subprog(). Like the bpf_pseudo_call handling,
bpf_pseudo_func temporarily stores the find_subprog() result in insn->off.
This is fine because the prog's insns have been finalized at this point.
insn->off is reset back to 0 later to avoid confusing the userspace prog
dump tool.

Fixes: 69c087ba6225 ("bpf: Add bpf_for_each_map_elem() helper")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211106014014.651018-1-kafai@fb.com
Cc: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
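To make the failure mode concrete, here is a simplified, self-contained
user-space sketch (NOT the kernel's code: toy_find_subprog(),
toy_remove_subprog() and the subprog_start[] layout are invented for
illustration, and it deliberately ignores that real DCE also shifts
instruction offsets). It shows why an index cached before dead code
elimination can point past the end of the per-subprog arrays, while
re-resolving the index from the target offset, in the spirit of
find_subprog(), stays valid.

/* Illustration only: a cached subprog index goes stale after a subprog
 * is removed, while re-resolving it from the target offset stays valid. */
#include <stdio.h>

/* Toy model: start offsets of each remaining subprog. */
static int subprog_start[8] = { 0, 10, 20, 30 };
static int nr_subprogs = 4;

/* Loosely in the spirit of the verifier's find_subprog(): map a start
 * offset back to the subprog's *current* index. */
static int toy_find_subprog(int start_off)
{
	int i;

	for (i = 0; i < nr_subprogs; i++)
		if (subprog_start[i] == start_off)
			return i;
	return -1;
}

/* Toy dead code elimination: drop one subprog and compact the array. */
static void toy_remove_subprog(int idx)
{
	int i;

	for (i = idx; i < nr_subprogs - 1; i++)
		subprog_start[i] = subprog_start[i + 1];
	nr_subprogs--;
}

int main(void)
{
	int target_off = 30;                           /* roughly what insn->imm encodes */
	int cached_idx = toy_find_subprog(target_off); /* old scheme: cache the index early */

	toy_remove_subprog(1);                         /* DCE removes a whole subprog */

	/* cached_idx is now 3 while only 3 subprogs (indexes 0..2) remain,
	 * mirroring the out-of-bounds access in jit_subprog(). */
	printf("cached index: %d, subprogs left: %d\n", cached_idx, nr_subprogs);

	/* Recomputing the index from the target, as bpf_pseudo_call already
	 * does and as bpf_pseudo_func does after this patch, stays correct. */
	printf("recomputed index: %d\n", toy_find_subprog(target_off));
	return 0;
}

In the kernel, the fix takes the same shape: never trust the cached value
in insn[1].imm, re-resolve via find_subprog() and let bpf_adj_branches()
keep insn->imm in sync, as described above.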
Diffstat (limited to 'include')
-rw-r--r--  include/linux/bpf.h | 6 ++++++
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index bb766a0b9b51..818cd594e922 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -533,6 +533,12 @@ bpf_ctx_record_field_size(struct bpf_insn_access_aux *aux, u32 size)
 	aux->ctx_field_size = size;
 }
 
+static inline bool bpf_pseudo_func(const struct bpf_insn *insn)
+{
+ return insn->code == (BPF_LD | BPF_IMM | BPF_DW) &&
+ insn->src_reg == BPF_PSEUDO_FUNC;
+}
+
 struct bpf_prog_ops {
 	int (*test_run)(struct bpf_prog *prog, const union bpf_attr *kattr,
 			union bpf_attr __user *uattr);
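For context, a usage sketch of the helper added above. walk_func_refs() is
hypothetical (not part of this patch or of the kernel) and only illustrates
how code walking a program's instruction stream could use bpf_pseudo_func(),
and why the ld_imm64 encoding spans two struct bpf_insn slots so that
insn[1] exists at all.

#include <linux/bpf.h>

/* Hypothetical, illustration-only walker: count ld_imm64 function
 * references in an instruction array. */
static int walk_func_refs(const struct bpf_insn *insns, int insn_cnt)
{
	int i, nr_refs = 0;

	for (i = 0; i < insn_cnt; i++) {
		const struct bpf_insn *insn = &insns[i];

		if (bpf_pseudo_func(insn)) {
			/* After this patch, the subprog is resolved on demand
			 * from insn->imm (as bpf_pseudo_call does via
			 * find_subprog()) rather than read from a cached
			 * index in insn[1].imm. */
			nr_refs++;
		}

		/* Any ld_imm64 (BPF_LD | BPF_IMM | BPF_DW) is 16 bytes wide
		 * and occupies two struct bpf_insn slots; skip the second. */
		if (insn->code == (BPF_LD | BPF_IMM | BPF_DW))
			i++;
	}

	return nr_refs;
}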