author     John Blackwood <john.blackwood@ccur.com>   2008-08-26 15:09:43 -0400
committer  Ingo Molnar <mingo@elte.hu>                2008-08-28 11:13:24 +0200
commit     f3ade837808121ff8bab9c56725f4fe40ec85a56
tree       eb9a8d87bff0a7d11eade583a7582e3c765e3b80   /kernel/sched_rt.c
parent     354879bb977e06695993435745f06a0f6d39ce2b
sched: fix sched_rt_rq_enqueue() resched idle
When sysctl_sched_rt_runtime is set to something other than -1 and the
CONFIG_RT_GROUP_SCHED kernel config option is NOT enabled, we get into a
state where we see one or more CPUs idling forever even though there are
real-time tasks in their rt runqueue that are able to run (no longer
throttled).
The sequence is:
- A real-time task is running when the timer sets the rt runqueue
to throttled, and the rt task is resched_task()ed and switched
out, and idle is switched in since there are no non-rt tasks to
run on that cpu.
- Eventually the do_sched_rt_period_timer() runs and un-throttles
the rt runqueue, but we just exit the timer interrupt and go back
to executing the idle task in the idle loop forever (see the
condensed sketch of this path below).
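For reference, the unthrottle side lives in do_sched_rt_period_timer(). A
condensed sketch of that step (paraphrased, not verbatim from the tree; the
bandwidth bookkeeping around rt_time and runtime is abbreviated) shows why
nothing ever kicks the idle task:

	/* Condensed sketch of do_sched_rt_period_timer()'s unthrottle step
	 * (paraphrased, not verbatim).  Once the throttle is lifted, the only
	 * notification the runqueue gets is sched_rt_rq_enqueue() -- which,
	 * without CONFIG_RT_GROUP_SCHED, was an empty stub before this patch,
	 * so the CPU keeps running the idle task. */
	if (rt_rq->rt_throttled && rt_rq->rt_time < runtime) {
		rt_rq->rt_throttled = 0;	/* rt tasks are runnable again  */
		sched_rt_rq_enqueue(rt_rq);	/* ...but this did nothing here */
	}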
If we change the sched_rt_rq_enqueue() routine to use some of the code
from the CONFIG_RT_GROUP_SCHED enabled version of this same routine and
resched_task() the currently executing task (idle in our case) if it is
a lower priority task than the highest rt task in the now un-throttled
runqueue, the problem is no longer observed.
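For comparison, the CONFIG_RT_GROUP_SCHED flavour of the routine roughly does
the following (a condensed sketch, not verbatim; the helpers and fields named
here exist in kernel/sched_rt.c of this era, but the exact code is abbreviated):

	/* Condensed sketch of the group-scheduling variant (not verbatim):
	 * re-enqueue the group's sched_rt_entity and preempt whatever is
	 * currently running if the rt queue now outranks it (a lower ->prio
	 * value means higher priority). */
	static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
	{
		struct sched_rt_entity *rt_se = rt_rq->rt_se;

		if (rt_se && !on_rt_rq(rt_se) && rt_rq->rt_nr_running) {
			struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;

			enqueue_rt_entity(rt_se);
			if (rt_rq->highest_prio < curr->prio)
				resched_task(curr);
		}
	}

The fix below borrows only the resched_task() part, since the
!CONFIG_RT_GROUP_SCHED case has no group entity to enqueue.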
Signed-off-by: John Blackwood <john.blackwood@ccur.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_rt.c')
 -rw-r--r--  kernel/sched_rt.c | 2 ++
 1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 998ba54b4543..07d9b3307907 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -199,6 +199,8 @@ static inline struct rt_rq *group_rt_rq(struct sched_rt_entity *rt_se)
 
 static inline void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
 {
+	if (rt_rq->rt_nr_running)
+		resched_task(rq_of_rt_rq(rt_rq)->curr);
 }
 
 static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
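With the hunk applied, the !CONFIG_RT_GROUP_SCHED stub (reassembled from the
diff above) reads:

	static inline void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
	{
		/* An rt task became runnable (e.g. the throttle was just
		 * lifted): preempt whatever is on this CPU, typically the
		 * idle task. */
		if (rt_rq->rt_nr_running)
			resched_task(rq_of_rt_rq(rt_rq)->curr);
	}

No priority comparison is needed here, presumably because without group
scheduling the case that matters is a non-rt task (idle, in this report)
running while rt tasks sit queued, so an unconditional resched whenever the
queue is non-empty is enough.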