author    Qais Yousef <qais.yousef@arm.com>          2022-08-04 15:36:06 +0100
committer Peter Zijlstra <peterz@infradead.org>      2022-10-27 11:01:19 +0200
commit    c56ab1b3506ba0e7a872509964b100912bde165d (patch)
tree      8d3ab721db07560dd9ae6df27db359d7094a69eb /kernel/sched
parent    a2e7f03ed28fce26c78b985f87913b6ce3accf9d (diff)
sched/uclamp: Make cpu_overutilized() use util_fits_cpu()
So that it is now uclamp aware.

This fixes a major problem: busy tasks capped with UCLAMP_MAX keep the system in the overutilized state, which disables EAS and wastes energy in the long run. Without this patch, running a busy background activity such as JIT compilation on Pixel 6 leaves the system in the overutilized state 74.5% of the time; with this patch, that drops to 9.79%.

It also fixes another problem: a long-running task whose UCLAMP_MIN is raised while it is running may need to upmigrate to honour the new UCLAMP_MIN value. That upmigration never gets triggered because the overutilized state is never set in this situation, so misfit migration never happens at the tick until the task wakes up again.

Fixes: af24bde8df202 ("sched/uclamp: Add uclamp support to energy_compute()")
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220804143609.515789-7-qais.yousef@arm.com
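For readers not familiar with util_fits_cpu(), the sketch below is a heavily simplified, illustrative approximation of the fit check this patch switches to. It is not the mainline implementation (which also handles thermal pressure and related corner cases); it only shows how the rq-level UCLAMP_MIN/UCLAMP_MAX values change the answer compared to a plain fits_capacity() check.

/*
 * Illustrative approximation of util_fits_cpu() -- NOT the mainline
 * implementation, which also deals with thermal pressure and other
 * corner cases. Shown only to explain why a UCLAMP_MAX-capped busy
 * CPU no longer reports itself as overutilized.
 */
static inline bool util_fits_cpu_sketch(unsigned long util,
					unsigned long uclamp_min,
					unsigned long uclamp_max,
					int cpu)
{
	unsigned long capacity = capacity_of(cpu);
	unsigned long capacity_orig = capacity_orig_of(cpu);

	/*
	 * UCLAMP_MAX caps how much capacity the runqueue is allowed to
	 * consume: even if raw utilization is high, a capped rq should
	 * not be treated as exceeding its (clamped) demand.
	 */
	util = min(util, uclamp_max);

	/*
	 * UCLAMP_MIN asks for at least this much capacity. If the CPU
	 * cannot provide it at all, the task does not fit here and a
	 * misfit migration to a bigger CPU is warranted.
	 */
	if (uclamp_min > capacity_orig)
		return false;

	return fits_capacity(util, capacity);
}

With this behaviour, cpu_overutilized() in the diff below returns false for a CPU whose only load is UCLAMP_MAX-capped background work, so EAS stays enabled; conversely, raising a running task's UCLAMP_MIN beyond what the CPU can offer makes the check fail and lets misfit migration move the task at the next tick.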
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/fair.c  5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cabbdac97eaa..a0ee3192e5a7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5987,7 +5987,10 @@ static inline void hrtick_update(struct rq *rq)
#ifdef CONFIG_SMP
static inline bool cpu_overutilized(int cpu)
{
- return !fits_capacity(cpu_util_cfs(cpu), capacity_of(cpu));
+ unsigned long rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
+ unsigned long rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
+
+ return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
}
static inline void update_overutilized_status(struct rq *rq)