author     Delyan Kratunov <delyank@fb.com>        2022-05-11 18:28:36 +0000
committer  Peter Zijlstra <peterz@infradead.org>   2022-05-12 00:37:11 +0200
commit     9c2136be0878c88c53dea26943ce40bb03ad8d8d
tree       63ea6e07f82cc650db9a4d14d87f4b82df34a9c6   /kernel/sched
parent     c5eb0a61238dd6faf37f58c9ce61c9980aaffd7a
sched/tracing: Append prev_state to tp args instead
Commit fa2c3254d7cf (sched/tracing: Don't re-read p->state when emitting
sched_switch event, 2022-01-20) added a new prev_state argument to the
sched_switch tracepoint, before the prev task_struct pointer.
This reordering of arguments broke BPF programs that use the raw
tracepoint (e.g. tp_btf programs). The type of the second argument has
changed and existing programs that assume a task_struct* argument
(e.g. for bpf_task_storage access) will now fail to verify.
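For illustration (hypothetical, not part of this commit), a tp_btf program of the sort that broke; the handler and map names are made up, and the pre-reorder argument layout (preempt, prev, next) is assumed:

/* Hypothetical sketch: assumes sched_switch args are (preempt, prev, next).
 * With prev_state inserted as the second argument, "prev" below is no longer
 * a task_struct pointer and bpf_task_storage_get() fails to verify.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);
} switch_ts SEC(".maps");

SEC("tp_btf/sched_switch")
int BPF_PROG(on_switch, bool preempt, struct task_struct *prev,
	     struct task_struct *next)
{
	__u64 *ts;

	/* Relies on args[1] being the previous task. */
	ts = bpf_task_storage_get(&switch_ts, prev, 0,
				  BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (ts)
		*ts = bpf_ktime_get_ns();
	return 0;
}

char LICENSE[] SEC("license") = "GPL";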
If we instead append the new argument at the end, all existing programs
continue to work unchanged and can conditionally extract the prev_state
argument on kernel versions that provide it.
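A minimal sketch of the resulting pattern (again hypothetical; the v5.18 threshold for the appended argument and the fallback read of prev->__state are assumptions, not part of this commit):

/* Hypothetical sketch: prev and next keep their positions, and prev_state is
 * read from the appended fourth raw-tracepoint argument only on kernels
 * assumed to have it (v5.18+); older kernels fall back to prev->__state
 * (named "state" before v5.14).
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

extern int LINUX_KERNEL_VERSION __kconfig;

#define KVER(a, b, c) (((a) << 16) + ((b) << 8) + (c))

SEC("tp_btf/sched_switch")
int BPF_PROG(on_switch, bool preempt, struct task_struct *prev,
	     struct task_struct *next)
{
	long prev_state;

	if (LINUX_KERNEL_VERSION >= KVER(5, 18, 0))
		/* ctx[3] only exists once prev_state is appended; the known
		 * __kconfig value lets the verifier prune this branch on
		 * older kernels.
		 */
		prev_state = (long)ctx[3];
	else
		prev_state = prev->__state;

	bpf_printk("pid %d -> %d, prev_state %ld", prev->pid, next->pid,
		   prev_state);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";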
Fixes: fa2c3254d7cf ("sched/tracing: Don't re-read p->state when emitting sched_switch event")
Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lkml.kernel.org/r/c8a6930dfdd58a4a5755fc01732675472979732b.camel@fb.com
Diffstat (limited to 'kernel/sched')
-rw-r--r--   kernel/sched/core.c   2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 51efaabac3e4..d58c0389eb23 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6382,7 +6382,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 		migrate_disable_switch(rq, prev);
 		psi_sched_switch(prev, next, !task_on_rq_queued(prev));
 
-		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev_state, prev, next);
+		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
 
 		/* Also unlocks the rq: */
 		rq = context_switch(rq, prev, next, &rf);