path: root/mm
Commit message  (Author, Date, Files changed, Lines -/+)
...
* | | | | mm: let swap use exceptional entries  (Hugh Dickins, 2011-08-03, 2 files, -26/+43)
* | | | | radix_tree: exceptional entries and indices  (Hugh Dickins, 2011-08-03, 1 file, -2/+2)
* | | | | fault-injection: add ability to export fault_attr in arbitrary directory  (Akinobu Mita, 2011-08-03, 2 files, -15/+12)
|/ / / /
* | | | oom: task->mm == NULL doesn't mean the memory was freed  (Oleg Nesterov, 2011-08-01, 1 file, -1/+3)
* | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/pen...  (Linus Torvalds, 2011-07-31, 1 file, -3/+4)
|\ \ \ \
| * | | | slab: use NUMA_NO_NODE  (Andrew Morton, 2011-07-31, 1 file, -1/+1)
| * | | | slab: remove one NR_CPUS dependency  (Eric Dumazet, 2011-07-28, 1 file, -2/+3)
* | | | | Merge branch 'slub/lockless' of git://git.kernel.org/pub/scm/linux/kernel/git...  (Linus Torvalds, 2011-07-30, 1 file, -252/+512)
|\ \ \ \ \ | |/ / / / |/| | | |
| * | | | slub: When allocating a new slab also prep the first object  (Christoph Lameter, 2011-07-25, 1 file, -0/+3)
| * | | | slub: disable interrupts in cmpxchg_double_slab when falling back to pagelock  (Christoph Lameter, 2011-07-18, 1 file, -4/+45)
| * | | | slub: Not necessary to check for empty slab on load_freelist  (Christoph Lameter, 2011-07-02, 1 file, -3/+2)
| * | | | slub: fast release on full slab  (Christoph Lameter, 2011-07-02, 1 file, -2/+19)
| * | | | slub: Add statistics for the case that the current slab does not match the node  (Christoph Lameter, 2011-07-02, 1 file, -0/+3)
| * | | | slub: Get rid of the another_slab label  (Christoph Lameter, 2011-07-02, 1 file, -6/+5)
| * | | | slub: Avoid disabling interrupts in free slowpath  (Christoph Lameter, 2011-07-02, 1 file, -11/+5)
| * | | | slub: Disable interrupts in free_debug processing  (Christoph Lameter, 2011-07-02, 1 file, -4/+10)
| * | | | slub: Invert locking and avoid slab lock  (Christoph Lameter, 2011-07-02, 1 file, -77/+52)
| * | | | slub: Rework allocator fastpaths  (Christoph Lameter, 2011-07-02, 1 file, -129/+280)
| * | | | slub: Pass kmem_cache struct to lock and freeze slab  (Christoph Lameter, 2011-07-02, 1 file, -7/+8)
| * | | | slub: explicit list_lock taking  (Christoph Lameter, 2011-07-02, 1 file, -40/+49)
| * | | | slub: Add cmpxchg_double_slab()  (Christoph Lameter, 2011-07-02, 1 file, -5/+60)
| * | | | slub: Move page->frozen handling near where the page->freelist handling occurs  (Christoph Lameter, 2011-07-02, 1 file, -2/+6)
| * | | | slub: Do not use frozen page flag but a bit in the page counters  (Christoph Lameter, 2011-07-02, 1 file, -6/+6)
| * | | | slub: Push irq disable into allocate_slab()  (Christoph Lameter, 2011-07-02, 1 file, -10/+13)
| | |/ /
| |/| |
* | | | atomic: use <linux/atomic.h>  (Arun Sharma, 2011-07-26, 4 files, -4/+4)
* | | | fail_page_alloc: simplify debugfs initialization  (Akinobu Mita, 2011-07-26, 1 file, -31/+16)
* | | | failslab: simplify debugfs initialization  (Akinobu Mita, 2011-07-26, 1 file, -21/+10)
* | | | fault-injection: use debugfs_remove_recursive  (Akinobu Mita, 2011-07-26, 2 files, -2/+2)
* | | | cpusets: randomize node rotor used in cpuset_mem_spread_node()  (Michal Hocko, 2011-07-26, 1 file, -0/+16)
* | | | memcg: get rid of percpu_charge_mutex lock  (Michal Hocko, 2011-07-26, 1 file, -10/+2)
* | | | memcg: add mem_cgroup_same_or_subtree() helper  (Michal Hocko, 2011-07-26, 1 file, -25/+26)
* | | | memcg: unify sync and async per-cpu charge cache draining  (Michal Hocko, 2011-07-26, 1 file, -14/+34)
* | | | memcg: do not try to drain per-cpu caches without pages  (Michal Hocko, 2011-07-26, 1 file, -6/+7)
* | | | memcg: add memory.vmscan_stat  (KAMEZAWA Hiroyuki, 2011-07-26, 2 files, -11/+200)
* | | | memcg: fix behavior of mem_cgroup_resize_limit()  (Daisuke Nishimura, 2011-07-26, 1 file, -1/+1)
* | | | memcg: fix vmscan count in small memcgs  (KAMEZAWA Hiroyuki, 2011-07-26, 1 file, -6/+12)
* | | | memcg: change memcg_oom_mutex to spinlock  (Michal Hocko, 2011-07-26, 1 file, -11/+11)
* | | | memcg: make oom_lock 0 and 1 based rather than counter  (Michal Hocko, 2011-07-26, 1 file, -16/+70)
* | | | memcg: consolidate memory cgroup lru stat functions  (KAMEZAWA Hiroyuki, 2011-07-26, 2 files, -128/+51)
* | | | memcg: export memory cgroup's swappiness with mem_cgroup_swappiness()  (KAMEZAWA Hiroyuki, 2011-07-26, 2 files, -21/+17)
* | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg...  (Linus Torvalds, 2011-07-26, 4 files, -72/+300)
|\ \ \ \
| |_|_|/
|/| | |
| * | | mm: properly reflect task dirty limits in dirty_exceeded logic  (Jan Kara, 2011-07-24, 1 file, -6/+20)
| * | | writeback: trace global_dirty_state  (Wu Fengguang, 2011-07-09, 1 file, -0/+1)
| * | | writeback: introduce max-pause and pass-good dirty limits  (Wu Fengguang, 2011-07-09, 1 file, -0/+33)
| * | | writeback: introduce smoothed global dirty limit  (Wu Fengguang, 2011-07-09, 1 file, -2/+72)
| * | | writeback: consolidate variable names in balance_dirty_pages()  (Wu Fengguang, 2011-07-09, 1 file, -10/+11)
| * | | writeback: show bdi write bandwidth in debugfs  (Wu Fengguang, 2011-07-09, 1 file, -11/+13)
| * | | writeback: bdi write bandwidth estimation  (Wu Fengguang, 2011-07-09, 2 files, -0/+99)
| * | | writeback: account per-bdi accumulated written pages  (Jan Kara, 2011-07-09, 2 files, -2/+9)
| * | | writeback: make writeback_control.nr_to_write straight  (Wu Fengguang, 2011-07-09, 2 files, -26/+8)