path: root/kernel/sched/fair.c
Commit message  Author  Age  Files  Lines
...
| * sched,lockdep: Employ lock pinning  Peter Zijlstra  2015-06-19  1  -3/+8
| * Merge branch 'timers/core' into sched/hrtimers  Thomas Gleixner  2015-06-19  1  -48/+28
| |\
* | \ Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/ke...  Linus Torvalds  2015-06-22  1  -48/+28
|\ \ \
| | |/
| |/|
| * | sched,perf: Fix periodic timers  Peter Zijlstra  2015-05-18  1  -4/+13
| * | sched: Cleanup bandwidth timers  Peter Zijlstra  2015-04-22  1  -44/+15
| * | sched: core: Use hrtimer_start[_expires]()  Thomas Gleixner  2015-04-22  1  -1/+1
* | | Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/ker...  Linus Torvalds  2015-06-22  1  -76/+296
|\ \ \
| | |/
| |/|
| * | sched/numa: Only consider less busy nodes as numa balancing destinations  Rik van Riel  2015-06-07  1  -2/+28
| * | Revert 095bebf61a46 ("sched/numa: Do not move past the balance point if unbal...  Rik van Riel  2015-06-07  1  -26/+15
| * | sched/fair: Prevent throttling in early pick_next_task_fair()  Ben Segall  2015-06-07  1  -11/+14
| * | sched/numa: Reduce conflict between fbq_classify_rq() and migration  Rik van Riel  2015-05-19  1  -27/+33
| * | sched: Fix function declaration return type mismatch  Nicholas Mc Guire  2015-05-17  1  -3/+3
| * | sched/numa: Document usages of mm->numa_scan_seq  Jason Low  2015-05-08  1  -0/+13
| * | sched, timer: Convert usages of ACCESS_ONCE() in the scheduler to READ_ONCE()...  Jason Low  2015-05-08  1  -9/+9
| * | sched: Move the loadavg code to a more obvious location  Peter Zijlstra  2015-05-08  1  -0/+183
| |/
* / sched, numa: do not hint for NUMA balancing on VM_MIXEDMAP mappings  Mel Gorman  2015-06-10  1  -1/+1
|/
* Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/ker...  Linus Torvalds  2015-04-13  1  -163/+262
|\
| * sched: Improve load balancing in the presence of idle CPUs  Preeti U Murthy  2015-03-27  1  -3/+5
| * sched: Optimize freq invariant accounting  Peter Zijlstra  2015-03-27  1  -12/+0
| * sched: Move CFS tasks to CPUs with higher capacity  Vincent Guittot  2015-03-27  1  -22/+47
| * sched: Remove unused struct sched_group_capacity::capacity_orig  Vincent Guittot  2015-03-27  1  -10/+3
| * sched: Replace capacity_factor by usage  Vincent Guittot  2015-03-27  1  -67/+72
| * sched: Calculate CPU's usage statistic and put it into struct sg_lb_stats::gr...  Vincent Guittot  2015-03-27  1  -0/+29
| * sched: Add struct rq::cpu_capacity_orig  Vincent Guittot  2015-03-27  1  -1/+7
| * sched: Make scale_rt invariant with frequency  Vincent Guittot  2015-03-27  1  -12/+5
| * sched: Make sched entity usage tracking scale-invariant  Morten Rasmussen  2015-03-27  1  -7/+14
| * sched: Remove frequency scaling from cpu_capacity  Vincent Guittot  2015-03-27  1  -7/+0
| * sched: Track group sched_entity usage contributions  Morten Rasmussen  2015-03-27  1  -0/+3
| * sched: Add sched_avg::utilization_avg_contrib  Vincent Guittot  2015-03-27  1  -16/+58
| * sched/numa: Avoid some pointless iterations  Jan Beulich  2015-02-18  1  -0/+2
| * sched/numa: Do not move past the balance point if unbalanced  Rik van Riel  2015-02-18  1  -15/+26
* | mm: numa: disable change protection for vma(VM_HUGETLB)  Naoya Horiguchi  2015-04-07  1  -1/+3
* | mm: numa: slow PTE scan rate if migration failures occur  Mel Gorman  2015-03-25  1  -2/+6
|/
* Merge branch 'sched/urgent' into sched/core  Ingo Molnar  2015-01-30  1  -1/+1
|\
| * sched/fair: Avoid using uninitialized variable in preferred_group_nid()  Jan Beulich  2015-01-28  1  -1/+1
* | sched/core: Rework rq->clock update skips  Peter Zijlstra  2015-01-14  1  -1/+1
* | sched/core: Validate rq_clock*() serialization  Peter Zijlstra  2015-01-14  1  -1/+1
* | sched/fair: Fix sched_entity::avg::decay_count initialization  Kirill Tkhai  2015-01-14  1  -1/+0
* | sched/fair: Fix the dealing with decay_count in __synchronize_entity_decay()  Xunlei Pang  2015-01-14  1  -1/+1
|/
* sched/fair: Fix RCU stall upon -ENOMEM in sched_create_group()  Tetsuo Handa  2015-01-09  1  -0/+4
* sched: Fix odd values in effective_load() calculations  Yuyang Du  2015-01-09  1  -1/+1
* sched/fair: Fix stale overloaded status in the busiest group finding logic  Wanpeng Li  2014-11-16  1  -1/+3
* sched: Move p->nr_cpus_allowed check to select_task_rq()  Wanpeng Li  2014-11-16  1  -3/+0
* sched/fair: Kill task_struct::numa_entry and numa_group::task_list  Kirill Tkhai  2014-11-16  1  -5/+0
* Merge branch 'sched/urgent' into sched/core, to pick up fixes before applying...  Ingo Molnar  2014-11-16  1  -0/+14
|\
| * sched/cputime: Fix clock_nanosleep()/clock_gettime() inconsistency  Stanislaw Gruszka  2014-11-16  1  -0/+7
| * sched/numa: Avoid selecting oneself as swap target  Peter Zijlstra  2014-11-16  1  -0/+7
* | sched: Refactor task_struct to use numa_faults instead of numa_* pointers  Iulia Manda  2014-11-04  1  -53/+57
* | sched: Check if we got a shallowest_idle_cpu before searching for least_loade...  Yao Dongdong  2014-11-04  1  -1/+1
* | sched/numa: Check all nodes when placing a pseudo-interleaved group  Rik van Riel  2014-10-28  1  -2/+9