author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2012-06-22 15:52:09 +0200
---|---|---
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2012-07-19 08:58:56 -0700
commit | 7490d0a4cfefa16f9d8ce636eb5b2e13d2432db3 (patch) |
tree | 70ee2418549bd1af674a7bda391140a45a5e11ce /kernel/sched/sched.h |
parent | 667fb5508900340d657645e0bfc9bf210a1fc363 (diff) |
sched/nohz: Rewrite and fix load-avg computation -- again
commit 5167e8d5417bf5c322a703d2927daec727ea40dd upstream.
Thanks to Charles Wang for spotting the defects in the current code:
- If we go idle during the sample window -- after sampling, we get a
negative bias because we can negate our own sample.
- If we wake up during the sample window we get a positive bias
because we push the sample to a known active period.
So rewrite the entire nohz load-avg muck once again, now adding
copious documentation to the code.
Reported-and-tested-by: Doug Smythies <dsmythies@telus.net>
Reported-and-tested-by: Charles Wang <muming.wq@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1340373782.18025.74.camel@twins
[ minor edits ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
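To make the deferred-fold idea concrete, here is a minimal user-space C sketch of the approach described above. It is illustrative only, not the kernel's implementation; names such as idle_delta, cpu_enter_idle(), cpu_exit_idle() and fold_at_window() are hypothetical. The point is that a CPU going NOHZ-idle parks its contribution in a deferred bucket instead of folding it into the global count immediately (the source of the negative bias), and a CPU waking up simply cancels its pending delta instead of pushing its tasks into a known-active period (the source of the positive bias); the bucket is applied only at the fixed sample-window boundary.

```c
/* load_fold_sketch.c -- illustrative sketch, not kernel code. */
#include <stdio.h>

static long calc_load_tasks;  /* runnable tasks counted at the last sample  */
static long idle_delta;       /* deferred delta from CPUs that went idle    */

/* CPU goes idle mid-window: remember the delta, do not touch the sample. */
static void cpu_enter_idle(long nr_running_here)
{
	idle_delta -= nr_running_here;
}

/* CPU wakes up mid-window: cancel its pending idle delta. */
static void cpu_exit_idle(long nr_running_here)
{
	idle_delta += nr_running_here;
}

/* Fixed sample-window boundary: fold the deferred delta exactly once. */
static void fold_at_window(void)
{
	calc_load_tasks += idle_delta;
	idle_delta = 0;
}

int main(void)
{
	calc_load_tasks = 4;  /* sample taken with 4 runnable tasks        */
	cpu_enter_idle(1);    /* a CPU idles right after the sample...     */
	cpu_exit_idle(1);     /* ...and wakes again before the boundary    */
	cpu_enter_idle(2);    /* another CPU idles and stays idle          */
	fold_at_window();     /* deferred deltas land at the boundary only */
	printf("load contribution after fold: %ld\n", calc_load_tasks); /* 2 */
	return 0;
}
```

Because nothing is folded between boundaries, neither going idle nor waking up inside the window can retroactively shift the sample that was just taken.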
Diffstat (limited to 'kernel/sched/sched.h')
-rw-r--r-- | kernel/sched/sched.h | 2 |
1 file changed, 0 insertions, 2 deletions
```diff
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index fb3acba4d52e..116ced06ecc0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -940,8 +940,6 @@ static inline u64 sched_avg_period(void)
 	return (u64)sysctl_sched_time_avg * NSEC_PER_MSEC / 2;
 }
 
-void calc_load_account_idle(struct rq *this_rq);
-
 #ifdef CONFIG_SCHED_HRTICK
 
 /*
```
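The only change to kernel/sched/sched.h in this backport is dropping the calc_load_account_idle() declaration: in the rewrite, idle accounting is driven by explicit enter/exit notifications from the NOHZ idle path rather than a per-rq hook. A hedged sketch of that replacement interface is shown below for orientation; the authoritative declarations live in the rest of the upstream commit (5167e8d5417b), not in this hunk, and may differ slightly from what is shown here.

```c
/* Sketch of the enter/exit style interface that replaces the per-rq
 * calc_load_account_idle() hook in the upstream rewrite.
 */
extern void calc_load_enter_idle(void);  /* CPU is entering NOHZ idle */
extern void calc_load_exit_idle(void);   /* CPU is leaving NOHZ idle  */
```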