author    David Vernet <void@manifault.com>  2024-02-05 22:39:20 -0600
committer Ingo Molnar <mingo@kernel.org>     2024-02-28 15:19:24 +0100
commit    7f1a7229718d788f26a711374da83adc2689837f (patch)
tree      7fe6e23ba0b197f7f0ed6442bf923ef1c7bdf9fd /kernel/sched
parent    9dfbc26d27aaf0f5958c5972188f16fe977e5af5 (diff)
sched/fair: Do strict inequality check for busiest misfit task group
In update_sd_pick_busiest(), when comparing two sched groups that are
both of type group_misfit_task, we currently consider the new group as
busier than the current busiest group even if the new group has the
same misfit task load as the current busiest group. We can avoid some
unnecessary writes if we instead only consider the new group to be the
busiest if it has a higher misfit task load than the current busiest.
This matches the behavior of other group types where we compare load,
such as two groups that are both overloaded.

Let's update the group_misfit_task type comparison to also only update
the busiest group in the event of strict inequality.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20240206043921.850302-3-void@manifault.com
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 41dda5311770..448520f4fe83 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10032,7 +10032,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
* If we have more than one misfit sg go with the biggest
* misfit.
*/
- if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+ if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
return false;
break;
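
For illustration, here is a minimal standalone C sketch of the effect of the
strict inequality. This is not the kernel source; struct group_stats and
pick_busier() are hypothetical stand-ins for the sg_lb_stats comparison inside
update_sd_pick_busiest(). With "<=", a candidate group whose misfit task load
merely ties the current busiest is rejected, so the caller skips a redundant
update of the busiest-group bookkeeping.

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical stand-in for the relevant part of sg_lb_stats. */
	struct group_stats {
		unsigned long misfit_task_load;
	};

	/*
	 * Return true only if @candidate is strictly busier than @busiest,
	 * mirroring the post-patch comparison: ties keep the current busiest.
	 */
	static bool pick_busier(const struct group_stats *candidate,
				const struct group_stats *busiest)
	{
		if (candidate->misfit_task_load <= busiest->misfit_task_load)
			return false;	/* tie or lighter: no update needed */
		return true;
	}

	int main(void)
	{
		struct group_stats busiest = { .misfit_task_load = 1024 };
		struct group_stats tied    = { .misfit_task_load = 1024 };
		struct group_stats heavier = { .misfit_task_load = 2048 };

		/* Tie no longer replaces the busiest group: prints 0. */
		printf("tied replaces busiest?    %d\n",
		       pick_busier(&tied, &busiest));
		/* Strictly heavier still wins: prints 1. */
		printf("heavier replaces busiest? %d\n",
		       pick_busier(&heavier, &busiest));
		return 0;
	}

Before the patch, the tied case would have returned true, causing
update_sd_pick_busiest()'s caller to rewrite the busiest-group state with an
equivalent group for no behavioral gain.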