path: root/drivers/hwmon/tmp421.c
authorPeter Zijlstra <peterz@infradead.org>2017-05-11 17:57:24 +0200
committerIngo Molnar <mingo@kernel.org>2017-09-29 19:35:16 +0200
commit144d8487bc6e9b741895709cb46d4e19b748a725 (patch)
tree00e02dd5dfbfa99e3be67ed6e2015bf60b7bed2f /drivers/hwmon/tmp421.c
parent1ea6c46a23f1213d1972bfae220db5c165e27bba (diff)
sched/fair: Implement synchronous PELT detach on load-balance migrate
Vincent wondered why his self-migrating task had a roughly 50% dip in load_avg when landing on the new CPU. This is because we unconditionally take the asynchronous detach_entity route, which can lead to the attach on the new CPU still seeing the old CPU's contribution to tg->load_avg, effectively halving the new CPU's shares.

While in general this is something we have to live with, there is the special case of runnable migration where we can do better.

Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'drivers/hwmon/tmp421.c')
0 files changed, 0 insertions, 0 deletions