From 1662867a9b2574bfdb9d4e97186aa131218d7210 Mon Sep 17 00:00:00 2001
From: Rik van Riel
Date: Sun, 8 Jun 2014 16:55:57 -0400
Subject: numa,sched: fix load_too_imbalanced logic inversion

This function is supposed to return true if the new load imbalance is
worse than the old one. It didn't. I can only hope brown paper bags
are in style.

Now things converge much better on both the 4 node and 8 node systems.

I am not sure why this did not seem to impact specjbb performance on
the 4 node system, which is the system I have full-time access to.

This bug was introduced recently, with commit e63da03639cc
("sched/numa: Allow task switch if load imbalance improves")

Signed-off-by: Rik van Riel
Signed-off-by: Linus Torvalds
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 17de1956ddad..9855e87d671a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1120,7 +1120,7 @@ static bool load_too_imbalanced(long orig_src_load, long orig_dst_load,
 	old_imb = orig_dst_load * 100 - orig_src_load * env->imbalance_pct;
 
 	/* Would this change make things worse? */
-	return (old_imb > imb);
+	return (imb > old_imb);
 }
 
 /*
-- 
cgit v1.2.3