author		Vincent Guittot <vincent.guittot@linaro.org>	2020-03-06 09:42:08 +0100
committer	Ingo Molnar <mingo@kernel.org>	2020-03-06 12:57:25 +0100
commit		5ab297bab984310267734dfbcc8104566658ebef (patch)
tree		51fbe5f5090fb7681cfb4d12612344afd944d30b /kernel/sched
parent		6212437f0f6043e825e021e4afc5cd63e248a2b4 (diff)
sched/fair: Fix reordering of enqueue/dequeue_task_fair()
Even when a cgroup is throttled, the group se of a child cgroup can still
be enqueued and its gse->on_rq stays true. When a task is enqueued on such
a child, we still have to update the load_avg and increase the h_nr_running
of the throttled cfs_rq. Nevertheless, the 1st for_each_sched_entity() loop
is skipped because gse->on_rq == true, and the 2nd loop because the cfs_rq
is throttled, whereas in such a case we have to both update load_avg with
the old h_nr_running and increase h_nr_running.
The same sequence can happen during dequeue, when the se moves to its
parent before breaking out of the 1st loop.
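To make the ordering problem concrete, here is a minimal user-space sketch of the two-loop walk. The toy_enqueue() helper, the my_q field and the flat struct layouts are illustrative stand-ins, not the kernel's types. A task is enqueued under a child group whose gse is already on_rq on a throttled cfs_rq, so the 1st loop breaks immediately and the accounting for the throttled level depends entirely on where the 2nd loop performs its throttle check:

#include <stdbool.h>
#include <stdio.h>

struct cfs_rq {
	bool throttled;
	int h_nr_running;
};

struct sched_entity {
	bool on_rq;
	struct cfs_rq *my_q;		/* toy: the cfs_rq this se is queued on */
	struct sched_entity *parent;	/* NULL at the root */
};

#define for_each_sched_entity(se)	for (; se; se = se->parent)

/* Toy enqueue walk; 'fixed' selects where the throttle check sits. */
static void toy_enqueue(struct sched_entity *se, bool fixed)
{
	/* 1st loop: enqueue up the tree, stop at the first on_rq entity. */
	for_each_sched_entity(se) {
		if (se->on_rq)
			break;
		se->on_rq = true;
		se->my_q->h_nr_running++;
		if (se->my_q->throttled)
			return;
	}

	/* 2nd loop: propagate accounting over the remaining ancestors. */
	for_each_sched_entity(se) {
		if (!fixed && se->my_q->throttled)
			return;			/* old order: update below is lost */
		se->my_q->h_nr_running++;	/* update_load_avg() et al. sit here */
		if (fixed && se->my_q->throttled)
			return;			/* new order: account, then stop */
	}
}

int main(void)
{
	struct cfs_rq throttled_q = { .throttled = true };
	struct cfs_rq child_q = { 0 };
	struct sched_entity gse = { .on_rq = true, .my_q = &throttled_q };
	struct sched_entity tse = { .my_q = &child_q, .parent = &gse };

	toy_enqueue(&tse, false);
	printf("old order: throttled level h_nr_running = %d\n",
	       throttled_q.h_nr_running);	/* 0: update skipped */

	tse.on_rq = false;
	child_q.h_nr_running = 0;
	toy_enqueue(&tse, true);
	printf("new order: throttled level h_nr_running = %d\n",
	       throttled_q.h_nr_running);	/* 1: counted before stopping */
	return 0;
}

With the pre-fix ordering the throttled level's h_nr_running is never incremented (the program prints 0); with the check moved after the updates it is incremented exactly once before the walk stops (it prints 1).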
Note that the update of load_avg will effectively happen only once, in
order to sync up to the throttled time. The next call to update load_avg
will stop early because the clock stays unchanged.
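That early exit falls out of the PELT clock of a throttled cfs_rq staying frozen. Below is a toy model of the guard, assuming a simple linear accumulator in place of the kernel's geometric PELT decay; toy_avg and toy_update_load_avg are hypothetical names, not the kernel's API:

#include <stdint.h>
#include <stdio.h>

struct toy_avg {
	uint64_t last_update_time;	/* when the signal was last synced */
	uint64_t load_sum;		/* toy stand-in for the PELT sums */
};

/*
 * Returns 1 if the signal was advanced, 0 if it was already in sync.
 * 'now' is the cfs_rq clock, which stays frozen while throttled.
 */
static int toy_update_load_avg(struct toy_avg *sa, uint64_t now, uint64_t weight)
{
	uint64_t delta = now - sa->last_update_time;

	if (!delta)
		return 0;		/* clock unchanged: nothing to accumulate */

	sa->load_sum += delta * weight;	/* real PELT decays geometrically */
	sa->last_update_time = now;
	return 1;
}

int main(void)
{
	struct toy_avg sa = { .last_update_time = 1000 };
	uint64_t throttled_clock = 1500;	/* clock frozen at throttle time */

	/* First call syncs the signal up to the throttled time... */
	printf("first call advanced: %d\n",
	       toy_update_load_avg(&sa, throttled_clock, 1));
	/* ...subsequent calls see delta == 0 and bail out immediately. */
	printf("second call advanced: %d\n",
	       toy_update_load_avg(&sa, throttled_clock, 1));
	return 0;
}

The first call accumulates the delta up to the throttle time; every later call sees delta == 0 and returns without touching the signal, which is why the extra update introduced by this fix is effectively a one-off sync.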
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Fixes: 6d4d22468dae ("sched/fair: Reorder enqueue/dequeue_task_fair path")
Link: https://lkml.kernel.org/r/20200306084208.12583-1-vincent.guittot@linaro.org
Diffstat (limited to 'kernel/sched')
-rw-r--r--	kernel/sched/fair.c	17
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 54bd6280676e..1dea8554ead0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5460,16 +5460,16 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 
-		/* end evaluation on encountering a throttled cfs_rq */
-		if (cfs_rq_throttled(cfs_rq))
-			goto enqueue_throttle;
-
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 		se_update_runnable(se);
 		update_cfs_group(se);
 
 		cfs_rq->h_nr_running++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
+
+		/* end evaluation on encountering a throttled cfs_rq */
+		if (cfs_rq_throttled(cfs_rq))
+			goto enqueue_throttle;
 	}
 
 enqueue_throttle:
@@ -5558,16 +5558,17 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 
-		/* end evaluation on encountering a throttled cfs_rq */
-		if (cfs_rq_throttled(cfs_rq))
-			goto dequeue_throttle;
-
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 		se_update_runnable(se);
 		update_cfs_group(se);
 
 		cfs_rq->h_nr_running--;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
+
+		/* end evaluation on encountering a throttled cfs_rq */
+		if (cfs_rq_throttled(cfs_rq))
+			goto dequeue_throttle;
+
 	}
 
 dequeue_throttle: