author	Morten Rasmussen <morten.rasmussen@arm.com>	2016-07-25 14:34:24 +0100
committer	Ingo Molnar <mingo@kernel.org>	2016-08-18 11:26:55 +0200
commit	9ee1cda5ee25c7dd82acf25892e0d229e818f8c7 (patch)
tree	0b9a59fce5180742ecf67b5644b9255fc3879374 /kernel/sched
parent	3676b13e8524c576825fe1e731e347dba0083888 (diff)
sched/core: Enable SD_BALANCE_WAKE for asymmetric capacity systems
A domain with the SD_ASYM_CPUCAPACITY flag set indicates that the sched_groups at this level and below do not include CPUs of all available capacities (e.g. a group containing little-only or big-only CPUs in a big.LITTLE system). It is therefore necessary to put more effort into finding an appropriate CPU at task wake-up, by enabling balancing at wake-up (SD_BALANCE_WAKE) on all lower (child) levels.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1469453670-2660-8-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
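The hunk below implements this by walking from the asymmetric level down through every child level and OR-ing SD_BALANCE_WAKE into its flags. As a rough illustration only (a standalone toy model, not kernel code; the struct, flag values, and domain names are made up, apart from mirroring the kernel's for_each_lower_domain() helper), the propagation looks like this:

    #include <stdio.h>

    #define TOY_ASYM_CPUCAPACITY 0x1   /* stand-in for SD_ASYM_CPUCAPACITY */
    #define TOY_BALANCE_WAKE     0x2   /* stand-in for SD_BALANCE_WAKE     */

    /* Toy stand-in for struct sched_domain: just flags and a child pointer. */
    struct toy_domain {
            const char *name;
            unsigned int flags;
            struct toy_domain *child;
    };

    /* Mirrors for_each_lower_domain(): visit sd, then each ->child below it. */
    #define for_each_lower_domain(sd) for (; sd; sd = (sd)->child)

    static void enable_wake_balance(struct toy_domain *sd)
    {
            if (sd->flags & TOY_ASYM_CPUCAPACITY) {
                    struct toy_domain *t = sd;

                    for_each_lower_domain(t)
                            t->flags |= TOY_BALANCE_WAKE;
            }
    }

    int main(void)
    {
            /* DIE spans big and little CPUs; MC groups are big-only/little-only. */
            struct toy_domain mc  = { "MC",  0, NULL };
            struct toy_domain die = { "DIE", TOY_ASYM_CPUCAPACITY, &mc };

            enable_wake_balance(&die);

            for (struct toy_domain *d = &die; d; d = d->child)
                    printf("%s: wake balance %s\n", d->name,
                           (d->flags & TOY_BALANCE_WAKE) ? "on" : "off");
            return 0;
    }

Here the DIE level spans both big and little CPUs and is marked asymmetric, so both DIE and the big-only/little-only MC level below it end up with wake-up balancing enabled, which is what the patch does for the real sched_domains in sd_init().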
Diffstat (limited to 'kernel/sched')
-rw-r--r--	kernel/sched/core.c	7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 57394650c6ab..4695df6ed752 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6444,6 +6444,13 @@ sd_init(struct sched_domain_topology_level *tl,
* Convert topological properties into behaviour.
*/
+ if (sd->flags & SD_ASYM_CPUCAPACITY) {
+ struct sched_domain *t = sd;
+
+ for_each_lower_domain(t)
+ t->flags |= SD_BALANCE_WAKE;
+ }
+
if (sd->flags & SD_SHARE_CPUCAPACITY) {
sd->flags |= SD_PREFER_SIBLING;
sd->imbalance_pct = 110;