path: root/kernel/sched/fair.c
Commit message | Author | Age | Files | Lines
* treewide: Remove old email address (Peter Zijlstra, 2015-11-23, 1 file, -1/+1)
* sched/numa: Fix math underflow in task_tick_numa() (Rik van Riel, 2015-11-09, 1 file, -1/+1)
* Merge branch 'sched/urgent' into sched/core, to pick up fixes and resolve con... (Ingo Molnar, 2015-10-20, 1 file, -4/+5)
|\
| * sched/fair: Update task group's load_avg after task migration (Yuyang Du, 2015-10-20, 1 file, -2/+3)
| * sched/fair: Fix overly small weight for interactive group entities (Yuyang Du, 2015-10-20, 1 file, -2/+2)
* | sched/core: Remove a parameter in the migrate_task_rq() function (xiaofeng.yan, 2015-10-06, 1 file, -1/+1)
* | sched/numa: Fix task_tick_fair() from disabling numa_balancing (Srikar Dronamraju, 2015-10-06, 1 file, -1/+1)
* | sched/fair: Remove unnecessary parameter for group_classify() (Leo Yan, 2015-09-18, 1 file, -5/+5)
* | sched/fair: Polish comments for LOAD_AVG_MAX (Leo Yan, 2015-09-18, 1 file, -2/+3)
* | sched/numa: Limit the amount of virtual memory scanned in task_numa_work() (Rik van Riel, 2015-09-18, 1 file, -6/+12)
* | sched/fair: Optimize per entity utilization tracking (Peter Zijlstra, 2015-09-13, 1 file, -7/+10)
* | sched/fair: Defer calling scaling functions (Dietmar Eggemann, 2015-09-13, 1 file, -2/+4)
* | sched/fair: Optimize __update_load_avg() (Peter Zijlstra, 2015-09-13, 1 file, -1/+1)
* | sched/fair: Rename scale() to cap_scale() (Peter Zijlstra, 2015-09-13, 1 file, -7/+7)
* | sched/fair: Get rid of scaling utilization by capacity_orig (Dietmar Eggemann, 2015-09-13, 1 file, -16/+22)
* | sched/fair: Name utilization related data and functions consistently (Dietmar Eggemann, 2015-09-13, 1 file, -18/+19)
* | sched/fair: Make utilization tracking CPU scale-invariant (Dietmar Eggemann, 2015-09-13, 1 file, -3/+4)
* | sched/fair: Convert arch_scale_cpu_capacity() from weak function to #define (Morten Rasmussen, 2015-09-13, 1 file, -21/+1)
* | sched/fair: Make load tracking frequency scale-invariant (Dietmar Eggemann, 2015-09-13, 1 file, -10/+17)
* | sched/numa: Convert sched_numa_balancing to a static_branch (Srikar Dronamraju, 2015-09-13, 1 file, -3/+3)
* | sched/numa: Disable sched_numa_balancing on UMA systems (Srikar Dronamraju, 2015-09-13, 1 file, -2/+2)
* | sched/numa: Rename numabalancing_enabled to sched_numa_balancing (Srikar Dronamraju, 2015-09-13, 1 file, -2/+2)
* | sched/fair: Fix nohz.next_balance update (Vincent Guittot, 2015-09-13, 1 file, -4/+30)
* | sched/core: Remove unused argument from sched_class::task_move_group (Peter Zijlstra, 2015-09-13, 1 file, -1/+1)
* | sched/fair: Unify switched_{from,to}_fair() and task_move_group_fair() (Byungchul Park, 2015-09-13, 1 file, -77/+52)
* | sched/fair: Make the entity load aging on attaching tunable (Peter Zijlstra, 2015-09-13, 1 file, -0/+4)
* | sched/fair: Fix switched_to_fair()'s per entity load tracking (Byungchul Park, 2015-09-13, 1 file, -0/+23)
* | sched/fair: Have task_move_group_fair() also detach entity load from the old ... (Byungchul Park, 2015-09-13, 1 file, -1/+5)
* | sched/fair: Have task_move_group_fair() unconditionally add the entity load t... (Byungchul Park, 2015-09-13, 1 file, -5/+4)
* | sched/fair: Factor out the {at,de}taching of the per entity load {to,from} th... (Byungchul Park, 2015-09-13, 1 file, -39/+38)
|/
* sched: Make sched_class::set_cpus_allowed() unconditional (Peter Zijlstra, 2015-08-12, 1 file, -0/+1)
* sched: Ensure a task has a non-normalized vruntime when returning back to CFS (Byungchul Park, 2015-08-12, 1 file, -2/+17)
* sched/fair: Clean up load average references (Yuyang Du, 2015-08-03, 1 file, -15/+29)
* sched/fair: Provide runnable_load_avg back to cfs_rq (Yuyang Du, 2015-08-03, 1 file, -10/+45)
* sched/fair: Remove task and group entity load when they are dead (Yuyang Du, 2015-08-03, 1 file, -1/+10)
* sched/fair: Init cfs_rq's sched_entity load average (Yuyang Du, 2015-08-03, 1 file, -5/+6)
* sched/fair: Implement update_blocked_averages() for CONFIG_FAIR_GROUP_SCHED=n (Vincent Guittot, 2015-08-03, 1 file, -0/+8)
* sched/fair: Rewrite runnable load and utilization average tracking (Yuyang Du, 2015-08-03, 1 file, -425/+205)
* sched/fair: Remove rq's runnable avg (Yuyang Du, 2015-08-03, 1 file, -21/+4)
* sched/fair: Beef up wake_wide() (Mike Galbraith, 2015-08-03, 1 file, -34/+33)
* sched/fair: Avoid pulling all tasks in idle balancing (Yuyang Du, 2015-08-03, 1 file, -0/+7)
* sched/fair: Fix a comment reflecting function name change (Byungchul Park, 2015-07-07, 1 file, -1/+1)
* sched/fair: Clean up the __sched_period() code (Boqun Feng, 2015-07-07, 1 file, -9/+4)
* sched/numa: Consider 'imbalance_pct' when comparing loads in numa_has_capacity() (Srikar Dronamraju, 2015-07-07, 1 file, -2/+3)
* sched/numa: Prefer NUMA hotness over cache hotness (Srikar Dronamraju, 2015-07-07, 1 file, -64/+25)
* sched/numa: Check sched_feat(NUMA) in migrate_improves_locality() (bsegall@google.com, 2015-07-07, 1 file, -2/+2)
* sched/fair: Test list head instead of list entry in throttle_cfs_rq() (Cong Wang, 2015-07-06, 1 file, -1/+1)
* Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/k... (Linus Torvalds, 2015-07-04, 1 file, -1/+21)
|\
| * sched/numa: Fix numa balancing stats in /proc/pid/sched (Srikar Dronamraju, 2015-07-04, 1 file, -1/+21)
* | Merge branch 'sched-hrtimers-for-linus' of git://git.kernel.org/pub/scm/linux... (Linus Torvalds, 2015-06-24, 1 file, -3/+8)
|\ \
| |/
|/|