| author | Vincent Guittot <vincent.guittot@linaro.org> | 2020-06-24 17:44:22 +0200 |
|---|---|---|
| committer | Borislav Petkov <bp@suse.de> | 2020-06-28 17:01:20 +0200 |
| commit | e21cf43406a190adfcc4bfe592768066fb3aaa9b (patch) | |
| tree | e5f7d137033c19c399fc285d5931a737f7be4f2f | |
| parent | 8c4890d1c3358fb8023d46e1e554c41d54f02878 (diff) | |
sched/cfs: change initial value of runnable_avg
A performance regression on the reaim benchmark was reported against
commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group").
The problem comes from the initial value of runnable_avg, which is set to
the max value. This is a problem if the newly forked task turns out to be
a short task, because the group of CPUs is wrongly classified as overloaded
and tasks are pulled less aggressively.
Set the initial value of runnable_avg equal to util_avg to reflect that there
is no waiting time so far.
Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200624154422.29166-1-vincent.guittot@linaro.org
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cbcb2f71599b..658aa7a2ae6f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 		}
 	}
 
-	sa->runnable_avg = cpu_scale;
+	sa->runnable_avg = sa->util_avg;
 
 	if (p->sched_class != &fair_sched_class) {
 		/*