| author | Peter Zijlstra <peterz@infradead.org> | 2021-04-20 10:18:17 +0200 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2021-04-21 13:55:42 +0200 |
| commit | 3a7956e25e1d7b3c148569e78895e1f3178122a9 (patch) | |
| tree | 3fed8e39328534e9aa499002bb4d8890bb75eac9 /kernel/sched | |
| parent | ad789f84c9a145f8a18744c0387cec22ec51651e (diff) | |
kthread: Fix PF_KTHREAD vs to_kthread() race
The kthread_is_per_cpu() construct relies on only being called on
PF_KTHREAD tasks (per the WARN in to_kthread). This gives rise to the
following usage pattern:
```c
	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
```
However, as reported by syzkaller, this is broken. The scenario is:

```
  CPU0                                  CPU1 (running p)

  (p->flags & PF_KTHREAD) // true
                                        begin_new_exec()
                                          me->flags &= ~(PF_KTHREAD|...);
  kthread_is_per_cpu(p)
    to_kthread(p)
      WARN(!(p->flags & PF_KTHREAD))    <-- *SPLAT*
```
Introduce __to_kthread(), which omits the WARN and checks both the
PF_KTHREAD flag and the kthread pointer rather than trusting either alone.
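The kernel/kthread.c side of this change is not visible in the
kernel/sched-limited diffstat below, so the following is only a minimal
sketch of the shape of such a helper, assuming the struct kthread pointer
is stashed in p->set_child_tid as it was at the time:

```c
/*
 * Sketch only; the actual helper lives in kernel/kthread.c and is not
 * part of the diff shown below. Assumes the struct kthread pointer is
 * stored in p->set_child_tid.
 */
static inline struct kthread *__to_kthread(struct task_struct *p)
{
	void *kthread = (__force void *)p->set_child_tid;

	/*
	 * A task racing through begin_new_exec() may have already dropped
	 * PF_KTHREAD; in that case the stale pointer must not be trusted,
	 * so return NULL instead of warning.
	 */
	if (kthread && !(p->flags & PF_KTHREAD))
		kthread = NULL;
	return kthread;
}
```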
Use this to remove the problematic pattern for kthread_is_per_cpu()
and fix a number of other kthread_*() functions that have similar
issues but are currently not used in ways that would expose the
problem.
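As a rough illustration (again a sketch, since the kthread.c hunks are not
included in this kernel/sched-limited view), kthread_is_per_cpu() built on
__to_kthread() can simply return false for tasks that are not, or are no
longer, kthreads, which is what lets the callers drop their PF_KTHREAD
pre-check:

```c
/* Sketch: not part of the diff shown below. */
bool kthread_is_per_cpu(struct task_struct *p)
{
	struct kthread *kthread = __to_kthread(p);

	/* Not (or no longer) a kthread: it cannot be a pcpu kthread. */
	if (!kthread)
		return false;

	return test_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
}
```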
Notably kthread_func() is only ever called on 'current', while
kthread_probe_data() is only used for PF_WQ_WORKER, which implies the
task is from kthread_create*().
Fixes: ac687e6e8c26 ("kthread: Extract KTHREAD_IS_PER_CPU")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Link: https://lkml.kernel.org/r/YH6WJc825C4P0FCK@hirez.programming.kicks-ass.net
Diffstat (limited to 'kernel/sched')
```
-rw-r--r--  kernel/sched/core.c | 2 +-
-rw-r--r--  kernel/sched/fair.c | 2 +-

2 files changed, 2 insertions(+), 2 deletions(-)
```
```diff
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fcb35ae15619..4a0668acd876 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7667,7 +7667,7 @@ static void balance_push(struct rq *rq)
 	 * histerical raisins.
 	 */
 	if (rq->idle == push_task ||
-	    ((push_task->flags & PF_KTHREAD) && kthread_is_per_cpu(push_task)) ||
+	    kthread_is_per_cpu(push_task) ||
 	    is_migration_disabled(push_task)) {
 
 		/*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7ea3b93f5268..1d75af1ecfb4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7612,7 +7612,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 		return 0;
 
 	/* Disregard pcpu kthreads; they are where they need to be. */
-	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
+	if (kthread_is_per_cpu(p))
 		return 0;
 
 	if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
```