author     Byungchul Park <byungchul.park@lge.com>           2016-01-15 16:07:49 +0900
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>   2016-04-12 07:33:14 -0700
commit     aab265efebfab78ba49e76c4bc77da87fd576134 (patch)
tree       38aa618a6b8f25045182be2feb5d35d70abaa151 /kernel
parent     5fe34a9655e6844da16f3625b176bd94e4080c9a (diff)
sched/fair: Avoid using decay_load_missed() with a negative value
commit 7400d3bbaa229eb8e7631d28fb34afd7cd2c96ff upstream.
decay_load_missed() cannot handle negative values, so we need to avoid
calling it with a negative value.
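For illustration only (this sketch is not part of the kernel): old_load,
cpu_load[] and tickless_load are all unsigned longs, so the "negative value"
above really means an unsigned wrap-around. When tickless_load is larger than
cpu_load[i], the old subtract-before-decay ordering wraps to a huge number,
which is then decayed into garbage. The standalone program below uses a
made-up decay() helper (simple halving per missed tick) as a stand-in for
decay_load_missed(); the names and numbers are invented, only the ordering
problem mirrors the patch.

/*
 * Toy illustration of the wrap-around fixed by this patch (not kernel
 * code).  decay() is a hypothetical stand-in for decay_load_missed():
 * it just halves the load once per missed tick.
 */
#include <stdio.h>

static unsigned long decay(unsigned long load, unsigned long missed)
{
	while (missed--)
		load /= 2;
	return load;
}

int main(void)
{
	unsigned long cpu_load = 100, tickless_load = 300, missed = 2;
	unsigned long old_way, new_way;

	/* Old ordering: subtract first, then decay.  The unsigned
	 * subtraction wraps around because tickless_load > cpu_load. */
	old_way = decay(cpu_load - tickless_load, missed) + tickless_load;

	/* New ordering (this patch): decay both terms separately, then
	 * recombine.  Any intermediate wrap cancels out. */
	new_way = decay(cpu_load, missed) - decay(tickless_load, missed)
		  + tickless_load;

	printf("subtract then decay: %lu\n", old_way);	/* huge bogus value */
	printf("decay then subtract: %lu\n", new_way);	/* 25 - 75 + 300 = 250 */
	return 0;
}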
Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Byungchul Park <byungchul.park@lge.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: perterz@infradead.org
Fixes: 59543275488d ("sched/fair: Prepare __update_cpu_load() to handle active tickless")
Link: http://lkml.kernel.org/r/20160115070749.GA1914@X58A-UD3R
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'kernel')
 -rw-r--r--  kernel/sched/fair.c  12
 1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 56b7d4b83947..adff850e5d42 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4459,9 +4459,17 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
 
 		/* scale is effectively 1 << i now, and >> i divides by scale */
 
-		old_load = this_rq->cpu_load[i] - tickless_load;
+		old_load = this_rq->cpu_load[i];
 		old_load = decay_load_missed(old_load, pending_updates - 1, i);
-		old_load += tickless_load;
+		if (tickless_load) {
+			old_load -= decay_load_missed(tickless_load, pending_updates - 1, i);
+			/*
+			 * old_load can never be a negative value because a
+			 * decayed tickless_load cannot be greater than the
+			 * original tickless_load.
+			 */
+			old_load += tickless_load;
+		}
 		new_load = this_load;
 		/*
 		 * Round up the averaging division if load is increasing. This
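To spell out the reasoning in the new in-code comment (using decayed() as
informal shorthand for decay_load_missed(..., pending_updates - 1, i), not an
actual kernel helper): decaying never increases a load, so
decayed(tickless_load) <= tickless_load, and therefore

    decayed(cpu_load[i]) - decayed(tickless_load) + tickless_load >= decayed(cpu_load[i]) >= 0

even when cpu_load[i] is smaller than tickless_load. Any intermediate unsigned
wrap-around in the subtraction cancels once tickless_load is added back, and
decay_load_missed() itself is now only ever called with the original,
non-negative loads.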