author     Suresh Siddha <suresh.b.siddha@intel.com>  2007-09-05 14:32:48 +0200
committer  Ingo Molnar <mingo@elte.hu>                2007-09-05 14:32:48 +0200
commit     7fd0d2dde929ead79901e389e70dbfb3c6c06986
tree       577c4626e1e6f1de79e41deaeea6699261c873aa /kernel
parent     b21010ed6498391c0f359f2a89c907533fe07fec
sched: fix MC/HT scheduler optimization, without breaking the FUZZ logic.
First, fix the check

	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task)

by replacing it with

	if (*imbalance < busiest_load_per_task)

The original check is always false for nice-0 tasks: SCHED_LOAD_SCALE_FUZZ is
the same as busiest_load_per_task for nice-0 tasks, so the left-hand side can
never be smaller than the right-hand side.
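To illustrate the arithmetic, here is a minimal user-space sketch (not kernel
code; it assumes the 2.6.23-era values, where SCHED_LOAD_SCALE is 1024 and
SCHED_LOAD_SCALE_FUZZ is defined to the same value, so a nice-0 task's
busiest_load_per_task is also 1024):

#include <stdio.h>

/*
 * Assumed 2.6.23-era values (illustrative only):
 * SCHED_LOAD_SCALE is 1024 and SCHED_LOAD_SCALE_FUZZ equals it.
 */
#define SCHED_LOAD_SCALE	1024UL
#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE

int main(void)
{
	/* A nice-0 task's load weight is SCHED_LOAD_SCALE, so for an
	 * all nice-0 workload busiest_load_per_task is 1024 as well. */
	unsigned long busiest_load_per_task = SCHED_LOAD_SCALE;
	unsigned long imbalance;

	for (imbalance = 0; imbalance <= 768; imbalance += 256) {
		/* Old check: imbalance + 1024 < 1024 can never hold. */
		int old_check = imbalance + SCHED_LOAD_SCALE_FUZZ <
						busiest_load_per_task;
		/* New check: a small but non-zero imbalance is detected. */
		int new_check = imbalance < busiest_load_per_task;

		printf("imbalance=%4lu  old=%d  new=%d\n",
		       imbalance, old_check, new_check);
	}
	return 0;
}

The old condition prints 0 for every value of imbalance, while the new one is
true for anything below one nice-0 task's load, which is exactly the "should
we bump the imbalance" case the code comment describes.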
With the above change in place, *imbalance was getting reset to 0 in the
corner-case path (the goto out_balanced), which made the FUZZ logic fail.
Fix that by not corrupting *imbalance: leave it as computed, and change it
only when the HT/MC optimization finds that forcing a task move is needed.
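In simplified form, the behavioural difference can be modelled with a small
self-contained sketch (hypothetical helper names, not the kernel code
verbatim; the real decision sits at the end of the small-imbalance block,
where taking the out_balanced exit ends up reporting a zero imbalance):

#include <stdio.h>

/*
 * Hypothetical, simplified model of the small-imbalance decision.
 * 'imbalance' arrives already computed by the caller; the question is
 * whether this path may clobber it.
 */

/* Old behaviour: losing the pwr_move/pwr_now comparison took the
 * out_balanced exit, which reported the imbalance as 0. */
static unsigned long old_small_imbalance(unsigned long imbalance,
					 unsigned long pwr_move,
					 unsigned long pwr_now,
					 unsigned long load_per_task)
{
	(void)imbalance;		/* never preserved */
	if (pwr_move <= pwr_now)
		return 0;		/* goto out_balanced */
	return load_per_task;		/* force one task to move */
}

/* New behaviour: bump the imbalance to one task's load only when the
 * HT/MC move gains throughput; otherwise keep the computed value. */
static unsigned long new_small_imbalance(unsigned long imbalance,
					 unsigned long pwr_move,
					 unsigned long pwr_now,
					 unsigned long load_per_task)
{
	if (pwr_move > pwr_now)
		return load_per_task;
	return imbalance;
}

int main(void)
{
	/* Corner case: the optimization does not gain throughput. */
	unsigned long imb = 300, pwr_move = 1000, pwr_now = 1000, lpt = 1024;

	printf("old: imbalance becomes %lu\n",
	       old_small_imbalance(imb, pwr_move, pwr_now, lpt));
	printf("new: imbalance stays   %lu\n",
	       new_small_imbalance(imb, pwr_move, pwr_now, lpt));
	return 0;
}

When the move does not pay off, the old model reports 0 and the caller sees
no imbalance at all, while the new one simply keeps the value it was handed,
which is what the FUZZ logic expects.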
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched.c | 8
1 file changed, 3 insertions, 5 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index b533d6db78aa..c8759ec6d8a9 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2512,7 +2512,7 @@ group_next:
 	 * a think about bumping its value to force at least one task to be
 	 * moved
 	 */
-	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
+	if (*imbalance < busiest_load_per_task) {
 		unsigned long tmp, pwr_now, pwr_move;
 		unsigned int imbn;
 
@@ -2564,10 +2564,8 @@ small_imbalance:
 		pwr_move /= SCHED_LOAD_SCALE;
 
 		/* Move if we gain throughput */
-		if (pwr_move <= pwr_now)
-			goto out_balanced;
-
-		*imbalance = busiest_load_per_task;
+		if (pwr_move > pwr_now)
+			*imbalance = busiest_load_per_task;
 	}
 
 	return busiest;