author    | Jiri Olsa <jolsa@kernel.org> | 2023-09-20 23:31:40 +0200
committer | Andrii Nakryiko <andrii@kernel.org> | 2023-09-25 16:37:44 -0700
commit    | dd8657894c11b03c6eb0fd53fe9d7fec2072d18b (patch)
tree      | c6c5d419041eae87c27d6c164c237a08589a8994 /kernel/trace
parent    | 3acf8ace68230e9558cf916847f1cc9f208abdf1 (diff)
bpf: Count missed stats in trace_call_bpf
Increase the misses stats when bpf prog array execution is skipped
because of the recursion check in trace_call_bpf.

Add bpf_prog_inc_misses_counters, which increases the misses
count for all bpf programs in a bpf_prog_array.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Song Liu <song@kernel.org>
Reviewed-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/bpf/20230920213145.1941596-5-jolsa@kernel.org
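
The helper named in the commit message, bpf_prog_inc_misses_counters, is defined outside kernel/trace and therefore does not appear in the diffstat below. A minimal sketch of what it plausibly looks like (an assumption, not the authoritative definition from this series): walk the prog array and bump each program's per-CPU misses counter via the existing per-program helper bpf_prog_inc_misses_counter().

```c
/*
 * Illustrative sketch only: the real definition lives outside the
 * kernel/trace diff shown here.  Assumed behaviour: walk the prog
 * array and increment each program's per-CPU misses counter.
 */
static inline void bpf_prog_inc_misses_counters(const struct bpf_prog_array *array)
{
	const struct bpf_prog_array_item *item;
	struct bpf_prog *prog;

	if (unlikely(!array))
		return;

	item = &array->items[0];
	while ((prog = READ_ONCE(item->prog))) {
		bpf_prog_inc_misses_counter(prog);	/* existing per-prog helper */
		item++;
	}
}
```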
Diffstat (limited to 'kernel/trace')
-rw-r--r-- | kernel/trace/bpf_trace.c | 3 |
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index f6a7d2524949..df697c74d519 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -117,6 +117,9 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
		 * and don't send kprobe event into ring-buffer,
		 * so return zero here
		 */
+		rcu_read_lock();
+		bpf_prog_inc_misses_counters(rcu_dereference(call->prog_array));
+		rcu_read_unlock();
		ret = 0;
		goto out;
	}
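
The misses counted here surface to userspace as recursion_misses in struct bpf_prog_info. A hypothetical readback snippet follows (how prog_fd is obtained is not shown; the field and the libbpf call are pre-existing uapi/libbpf interfaces, not part of this patch):

```c
/* Hypothetical readback of the counter this patch increments. */
#include <stdio.h>
#include <string.h>
#include <bpf/bpf.h>	/* bpf_obj_get_info_by_fd(), pulls in struct bpf_prog_info */

static int print_recursion_misses(int prog_fd)
{
	struct bpf_prog_info info;
	__u32 len = sizeof(info);

	memset(&info, 0, sizeof(info));
	if (bpf_obj_get_info_by_fd(prog_fd, &info, &len))
		return -1;

	printf("recursion_misses: %llu\n",
	       (unsigned long long)info.recursion_misses);
	return 0;
}
```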