author | Peter Zijlstra <peterz@infradead.org> | 2015-05-15 17:43:34 +0200 |
---|---|---|
committer | Ingo Molnar <mingo@kernel.org> | 2015-08-12 12:06:09 +0200 |
commit | 25834c73f93af7f0712c98ca4593691592e6b360 (patch) | |
tree | 5da4f7c4da2a85ba6458a4de234da5e3d0a0c27a /include/linux/kthread.h | |
parent | 7855a35ac07a350e2cd26f09568a6d8e372be358 (diff) | |
sched: Fix a race between __kthread_bind() and sched_setaffinity()
Because sched_setaffinity() checks p->flags & PF_NO_SETAFFINITY
without locks, a caller might observe a stale value, race with the
set_cpus_allowed_ptr() call from __kthread_bind(), and effectively undo
it:
    __kthread_bind()
      do_set_cpus_allowed()
                                    <SYSCALL>
                                    sched_setaffinity()
                                      if (p->flags & PF_NO_SETAFFINITY)
                                      set_cpus_allowed_ptr()
      p->flags |= PF_NO_SETAFFINITY
Fix the bug by putting everything under the regular scheduler locks.
This also closes a hole in the serialization of task_struct::{nr_,}cpus_allowed.
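For illustration, here is a minimal userspace analogue of the fixed scheme (a sketch, not the kernel code: the names bind_sketch, setaffinity_sketch, and task_lock are hypothetical, with task_lock loosely standing in for the scheduler's p->pi_lock and rq locks). The point is that the flag test and the mask update now sit in the same critical section as the bind, so the check-then-act window above disappears:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t task_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for the scheduler locks */
static bool no_setaffinity;        /* models PF_NO_SETAFFINITY */
static unsigned long cpus_allowed; /* models task_struct::cpus_allowed */

/* __kthread_bind(): mask update and flag set form one atomic section */
static void bind_sketch(unsigned long mask)
{
	pthread_mutex_lock(&task_lock);
	cpus_allowed = mask;     /* do_set_cpus_allowed() */
	no_setaffinity = true;   /* p->flags |= PF_NO_SETAFFINITY */
	pthread_mutex_unlock(&task_lock);
}

/* sched_setaffinity(): the flag check and the mask update happen under
 * the same lock, so they can no longer slip between the two steps of
 * bind_sketch() */
static int setaffinity_sketch(unsigned long mask)
{
	int ret = 0;

	pthread_mutex_lock(&task_lock);
	if (no_setaffinity)
		ret = -1;            /* the kernel returns -EINVAL here */
	else
		cpus_allowed = mask; /* set_cpus_allowed_ptr() */
	pthread_mutex_unlock(&task_lock);
	return ret;
}

int main(void)
{
	bind_sketch(0x1);
	assert(setaffinity_sketch(0x2) == -1); /* the binding cannot be undone */
	assert(cpus_allowed == 0x1);
	return 0;
}
```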
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dedekind1@gmail.com
Cc: juri.lelli@arm.com
Cc: mgorman@suse.de
Cc: riel@redhat.com
Cc: rostedt@goodmis.org
Link: http://lkml.kernel.org/r/20150515154833.545640346@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/linux/kthread.h')
-rw-r--r-- | include/linux/kthread.h | 1 |
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 13d55206ccf6..869b21dcf503 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -38,6 +38,7 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
 })
 
 void kthread_bind(struct task_struct *k, unsigned int cpu);
+void kthread_bind_mask(struct task_struct *k, const struct cpumask *mask);
 int kthread_stop(struct task_struct *k);
 bool kthread_should_stop(void);
 bool kthread_should_park(void);
```
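The new declaration pairs with kthread_bind(): it binds a kthread to a whole cpumask rather than a single CPU, and like kthread_bind() it must be called after creation but before the thread first runs. A hypothetical caller might look as follows (a sketch only; my_threadfn, start_bound_worker, and the "bound-worker" name are illustrative and not part of the patch):

```c
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

static int my_threadfn(void *data)
{
	/* Placeholder work loop for the sketch. */
	while (!kthread_should_stop())
		msleep(10);
	return 0;
}

/* Create a kthread and pin it to @mask before it first runs, the same
 * pattern used with kthread_bind() for a single CPU. */
static struct task_struct *start_bound_worker(const struct cpumask *mask)
{
	struct task_struct *p;

	p = kthread_create(my_threadfn, NULL, "bound-worker");
	if (IS_ERR(p))
		return p;

	kthread_bind_mask(p, mask); /* also sets PF_NO_SETAFFINITY, now under the task's locks */
	wake_up_process(p);
	return p;
}
```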