author		Aubrey Li <aubrey.li@intel.com>		2020-03-26 13:42:29 +0800
committer	Ingo Molnar <mingo@kernel.org>		2020-04-08 11:35:20 +0200
commit		111688ca1c4a43a7e482f5401f82c46326b8ed49 (patch)
tree		9ff221c39692293560d0b9def0cef07c081abd98 /kernel
parent		26a8b12747c975b33b4a82d62e4a307e1c07f31b (diff)
sched/fair: Fix negative imbalance in imbalance calculation
A negative imbalance value was observed after imbalance calculation. This
happens when the local sched group type is group_fully_busy and the average
load of the local group is greater than that of the selected busiest group.
Fix this problem by comparing the average load of the local and busiest
groups before applying the imbalance calculation formula.

Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Aubrey Li <aubrey.li@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/1585201349-70192-1-git-send-email-aubrey.li@intel.com
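Note: the sign problem comes from the load-based branch of
calculate_imbalance(), which in this kernel era derives the imbalance
roughly from min((busiest->avg_load - sds->avg_load),
(sds->avg_load - local->avg_load)) scaled by group capacity; when the
fully-busy local group's average load exceeds the domain average, the
second term goes negative. The following standalone userspace sketch
(not kernel code; the numbers and the simplified formula are
illustrative assumptions) reproduces the negative result and shows how
the added guard avoids it:

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024L

static long min_long(long a, long b) { return a < b ? a : b; }

/* Simplified stand-in for the load-based imbalance term. */
static long imbalance(long local_avg, long busiest_avg, long sds_avg,
		      long capacity, int guarded)
{
	/* Guard added by this patch: the local group already carries
	 * more load than the busiest group, so pull nothing. */
	if (guarded && local_avg >= busiest_avg)
		return 0;

	return min_long((busiest_avg - sds_avg) * capacity,
			(sds_avg - local_avg) * capacity) / SCHED_CAPACITY_SCALE;
}

int main(void)
{
	/* Made-up loads: the local group (group_fully_busy) averages
	 * more load than the group selected as busiest. */
	long local_avg = 1300, busiest_avg = 1200, sds_avg = 1250;
	long capacity = 1024;

	printf("without guard: %ld\n",	/* prints -50 */
	       imbalance(local_avg, busiest_avg, sds_avg, capacity, 0));
	printf("with guard:    %ld\n",	/* prints 0 */
	       imbalance(local_avg, busiest_avg, sds_avg, capacity, 1));
	return 0;
}

With the guard, the function bails out with an imbalance of 0 before the
formula runs, mirroring the early return added to calculate_imbalance()
in the diff below.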
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched/fair.c	| 8 ++++++++
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 95cbd9e7958d..02f323b85b6d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9036,6 +9036,14 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 
 		sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
 				sds->total_capacity;
+		/*
+		 * If the local group is more loaded than the selected
+		 * busiest group don't try to pull any tasks.
+		 */
+		if (local->avg_load >= busiest->avg_load) {
+			env->imbalance = 0;
+			return;
+		}
 	}
 
 	/*