path: root/kernel/cpuset.c
author: Oleg Nesterov <oleg@redhat.com> 2010-03-15 10:10:19 +0100
committer: Ingo Molnar <mingo@elte.hu> 2010-04-02 20:12:02 +0200
commit: 30da688ef6b76e01969b00608202fff1eed2accc (patch)
tree: f4068cb8cf29f1d93d8489b162f41b7ac15a3d0c /kernel/cpuset.c
parent: c1804d547dc098363443667609c272d1e4d15ee8 (diff)
sched: sched_exec(): Remove the select_fallback_rq() logic
sched_exec()->select_task_rq() reads/updates ->cpus_allowed lockless. This can race with other CPUs updating our ->cpus_allowed, and this looks meaningless to me. The task is current and running; it must have online CPUs in ->cpus_allowed, so the fallback mode is bogus. And if ->sched_class returns the "wrong" CPU, this likely means we raced with set_cpus_allowed(), which was called for a reason; why should sched_exec() retry and call ->select_task_rq() again?

Change the code to call sched_class->select_task_rq() directly and do nothing if the returned CPU turns out to be wrong after re-checking under rq->lock.

From now on, task_struct->cpus_allowed is always stable under TASK_WAKING: select_fallback_rq() is always called either under rq->lock or by a caller that owns TASK_WAKING (select_task_rq).

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091019.GA9141@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/cpuset.c')
0 files changed, 0 insertions, 0 deletions