author    Dimitri Sivanich <sivanich@sgi.com>    2008-10-31 08:03:41 -0500
committer Ingo Molnar <mingo@elte.hu>            2008-11-03 11:29:00 +0100
commit    e113a745f693af196c8081b328bf42def086989b (patch)
tree      70d0576dfebdd0207093372b70115776f03bc16e /kernel
parent    45beca08dd8b6d6a65c5ffd730af2eac7a2c7a03 (diff)
sched/rt: small optimization to update_curr_rt()
Impact: micro-optimization to SCHED_FIFO/RR scheduling

A very minor improvement, but might it be better to check
sched_rt_runtime(rt_rq) before taking the rt_runtime_lock?

Peter Zijlstra observes:

> Yes, I think its ok to do so.
>
> Like pointed out in the other thread, there are two races:
>
>  - sched_rt_runtime() going to RUNTIME_INF, and that will be handled
>    properly by sched_rt_runtime_exceeded()
>
>  - sched_rt_runtime() going to !RUNTIME_INF, and here we can miss an
>    accounting cycle, but I don't think that is something to worry too
>    much about.

Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/sched_rt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index d9ba9d5f99d6..c7963d5d0625 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -537,13 +537,13 @@ static void update_curr_rt(struct rq *rq)
 	for_each_sched_rt_entity(rt_se) {
 		rt_rq = rt_rq_of_se(rt_se);
 
-		spin_lock(&rt_rq->rt_runtime_lock);
 		if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
+			spin_lock(&rt_rq->rt_runtime_lock);
 			rt_rq->rt_time += delta_exec;
 			if (sched_rt_runtime_exceeded(rt_rq))
 				resched_task(curr);
+			spin_unlock(&rt_rq->rt_runtime_lock);
 		}
-		spin_unlock(&rt_rq->rt_runtime_lock);
 	}
 }
 
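
Editor's note: the check-before-lock shape of this patch is easy to reproduce
outside the kernel. Below is a minimal userspace sketch using pthreads; it is
an illustrative analogue, not the kernel's code. struct rt_queue,
queue_runtime() and account_delta() are hypothetical stand-ins for
struct rt_rq, sched_rt_runtime() and the accounting loop in update_curr_rt(),
and the re-validation that the commit message says sched_rt_runtime_exceeded()
performs is only noted in a comment.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define RUNTIME_INF ((uint64_t)~0ULL)	/* "no limit" sentinel, as in the patch */

struct rt_queue {			/* hypothetical stand-in for struct rt_rq */
	pthread_spinlock_t lock;	/* stand-in for rt_runtime_lock */
	uint64_t runtime;		/* RUNTIME_INF means accounting is off */
	uint64_t time;			/* accumulated runtime, like rt_rq->rt_time */
};

/* Unlocked read, like sched_rt_runtime(): a stale value is tolerable here. */
static uint64_t queue_runtime(struct rt_queue *q)
{
	return __atomic_load_n(&q->runtime, __ATOMIC_RELAXED);
}

static void account_delta(struct rt_queue *q, uint64_t delta)
{
	/*
	 * Check before locking.  If runtime flips to RUNTIME_INF right after
	 * the test, the consumer must re-validate under the lock (per the
	 * commit message, sched_rt_runtime_exceeded() handles this); if it
	 * flips the other way, one accounting cycle is missed, which the
	 * patch deems acceptable.
	 */
	if (queue_runtime(q) != RUNTIME_INF) {
		pthread_spin_lock(&q->lock);
		q->time += delta;
		pthread_spin_unlock(&q->lock);
	}
}

int main(void)
{
	struct rt_queue q = { .runtime = 1000000, .time = 0 };

	pthread_spin_init(&q.lock, PTHREAD_PROCESS_PRIVATE);
	account_delta(&q, 1234);	/* runtime limited: delta is accounted */
	q.runtime = RUNTIME_INF;
	account_delta(&q, 1234);	/* unlimited: the lock is never taken */
	printf("accounted time: %llu\n", (unsigned long long)q.time);
	pthread_spin_destroy(&q.lock);
	return 0;
}

The design point the sketch tries to show: the unlocked read only gates
whether any work is done at all, so correctness never rests on it. A stale
RUNTIME_INF read skips one accounting cycle; a stale finite read takes the
lock once more than strictly needed. Both races are benign, which is exactly
the argument quoted from Peter Zijlstra above.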