| Commit message | Author | Date | Files | Lines (-/+) |
* | mm/zsmalloc: keep comments consistent with code | Ganesh Mahendran | 2016-07-28 | 1 | -4/+3 |
* | mm/zsmalloc: avoid calculate max objects of zspage twice | Ganesh Mahendran | 2016-07-28 | 1 | -16/+10 |
* | mm/zsmalloc: use class->objs_per_zspage to get num of max objects | Ganesh Mahendran | 2016-07-28 | 1 | -11/+7 |
* | mm/zsmalloc: take obj index back from find_alloced_obj | Ganesh Mahendran | 2016-07-28 | 1 | -2/+6 |
* | mm/zsmalloc: use obj_index to keep consistent with others | Ganesh Mahendran | 2016-07-28 | 1 | -7/+7 |
* | mm: bail out in shrink_inactive_list() | Minchan Kim | 2016-07-28 | 1 | -0/+27 |
* | mm, vmscan: account for skipped pages as a partial scan | Mel Gorman | 2016-07-28 | 1 | -2/+18 |
* | mm: consider whether to deactivate based on eligible zones inactive ratio | Mel Gorman | 2016-07-28 | 1 | -5/+29 |
* | mm: remove reclaim and compaction retry approximations | Mel Gorman | 2016-07-28 | 6 | -58/+37 |
* | mm, vmscan: remove highmem_file_pages | Mel Gorman | 2016-07-28 | 1 | -8/+4 |
* | mm: add per-zone lru list stat | Minchan Kim | 2016-07-28 | 3 | -9/+15 |
* | mm, vmscan: release/reacquire lru_lock on pgdat change | Mel Gorman | 2016-07-28 | 1 | -11/+10 |
* | mm, vmscan: remove redundant check in shrink_zones() | Mel Gorman | 2016-07-28 | 1 | -3/+0 |
* | mm, vmscan: Update all zone LRU sizes before updating memcg | Mel Gorman | 2016-07-28 | 2 | -11/+34 |
* | mm: show node_pages_scanned per node, not zone | Minchan Kim | 2016-07-28 | 1 | -3/+3 |
* | mm, pagevec: release/reacquire lru_lock on pgdat change | Mel Gorman | 2016-07-28 | 1 | -10/+10 |
* | mm, page_alloc: fix dirtyable highmem calculation | Minchan Kim | 2016-07-28 | 1 | -6/+10 |
* | mm, vmstat: remove zone and node double accounting by approximating retries | Mel Gorman | 2016-07-28 | 6 | -42/+67 |
* | mm, vmstat: print node-based stats in zoneinfo file | Mel Gorman | 2016-07-28 | 1 | -0/+24 |
* | mm: vmstat: account per-zone stalls and pages skipped during reclaim | Mel Gorman | 2016-07-28 | 2 | -3/+15 |
* | mm: vmstat: replace __count_zone_vm_events with a zone id equivalent | Mel Gorman | 2016-07-28 | 1 | -1/+1 |
* | mm: page_alloc: cache the last node whose dirty limit is reached | Mel Gorman | 2016-07-28 | 1 | -2/+11 |
* | mm, page_alloc: remove fair zone allocation policy | Mel Gorman | 2016-07-28 | 3 | -78/+2 |
* | mm, vmscan: add classzone information to tracepoints | Mel Gorman | 2016-07-28 | 1 | -5/+9 |
* | mm, vmscan: Have kswapd reclaim from all zones if reclaiming and buffer_heads... | Mel Gorman | 2016-07-28 | 1 | -8/+14 |
* | mm, vmscan: avoid passing in `remaining' unnecessarily to prepare_kswapd_sleep() | Mel Gorman | 2016-07-28 | 1 | -8/+4 |
* | mm, vmscan: avoid passing in classzone_idx unnecessarily to compaction_ready | Mel Gorman | 2016-07-28 | 1 | -20/+7 |
* | mm, vmscan: avoid passing in classzone_idx unnecessarily to shrink_node | Mel Gorman | 2016-07-28 | 1 | -11/+9 |
* | mm: convert zone_reclaim to node_reclaim | Mel Gorman | 2016-07-28 | 4 | -53/+60 |
* | mm, page_alloc: wake kswapd based on the highest eligible zone | Mel Gorman | 2016-07-28 | 1 | -1/+1 |
* | mm, vmscan: only wakeup kswapd once per node for the requested classzone | Mel Gorman | 2016-07-28 | 2 | -4/+17 |
* | mm: move vmscan writes and file write accounting to the node | Mel Gorman | 2016-07-28 | 3 | -9/+9 |
* | mm: move most file-based accounting to the node | Mel Gorman | 2016-07-28 | 12 | -117/+107 |
* | mm: rename NR_ANON_PAGES to NR_ANON_MAPPED | Mel Gorman | 2016-07-28 | 2 | -5/+5 |
* | mm: move page mapped accounting to the node | Mel Gorman | 2016-07-28 | 4 | -13/+13 |
* | mm, page_alloc: consider dirtyable memory in terms of nodes | Mel Gorman | 2016-07-28 | 2 | -45/+72 |
* | mm, workingset: make working set detection node-aware | Mel Gorman | 2016-07-28 | 2 | -40/+23 |
* | mm, memcg: move memcg limit enforcement from zones to nodes | Mel Gorman | 2016-07-28 | 3 | -120/+95 |
* | mm, vmscan: make shrink_node decisions more node-centric | Mel Gorman | 2016-07-28 | 4 | -32/+41 |
* | mm: vmscan: do not reclaim from kswapd if there is any eligible zone | Mel Gorman | 2016-07-28 | 1 | -32/+27 |
* | mm, vmscan: remove duplicate logic clearing node congestion and dirty state | Mel Gorman | 2016-07-28 | 1 | -12/+12 |
* | mm, vmscan: by default have direct reclaim only shrink once per node | Mel Gorman | 2016-07-28 | 1 | -8/+14 |
* | mm, vmscan: simplify the logic deciding whether kswapd sleeps | Mel Gorman | 2016-07-28 | 3 | -54/+54 |
* | mm, vmscan: remove balance gap | Mel Gorman | 2016-07-28 | 1 | -11/+8 |
* | mm, vmscan: make kswapd reclaim in terms of nodes | Mel Gorman | 2016-07-28 | 1 | -191/+101 |
* | mm, vmscan: have kswapd only scan based on the highest requested zone | Mel Gorman | 2016-07-28 | 1 | -5/+2 |
* | mm, vmscan: begin reclaiming pages on a per-node basis | Mel Gorman | 2016-07-28 | 1 | -24/+55 |
* | mm, vmscan: move LRU lists to node | Mel Gorman | 2016-07-28 | 17 | -222/+270 |
* | mm, vmscan: move lru_lock to the node | Mel Gorman | 2016-07-28 | 10 | -62/+62 |
* | mm, vmstat: add infrastructure for per-node vmstats | Mel Gorman | 2016-07-28 | 3 | -29/+285 |
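The bulk of this 2016-07-28 batch is Mel Gorman's node-based reclaim series, whose central change is recorded in "mm, vmscan: move LRU lists to node" and "mm, vmscan: move lru_lock to the node": LRU state that previously lived in each zone moves to the per-node pg_data_t, so kswapd and direct reclaim operate node-wide. The sketch below is a rough orientation only, not the kernel's actual definitions; the stand-in types are simplified placeholders, and only the field names lru_lock and lruvec follow the upstream code.

```c
/* Simplified stand-ins so the sketch compiles outside the kernel. */
struct list_head { struct list_head *next, *prev; };
typedef struct { int locked; } spinlock_t;

enum lru_list {
	LRU_INACTIVE_ANON, LRU_ACTIVE_ANON,
	LRU_INACTIVE_FILE, LRU_ACTIVE_FILE,
	LRU_UNEVICTABLE, NR_LRU_LISTS,
};

struct lruvec { struct list_head lists[NR_LRU_LISTS]; };

/* Before the series: every zone carried its own LRU lists and lock,
 * so reclaim decisions were made zone by zone. */
struct zone_old {
	spinlock_t lru_lock;   /* serialized reclaim within one zone */
	struct lruvec lruvec;  /* per-zone anon/file LRU lists */
};

/* After the series: one set of LRU lists and one lock per NUMA node,
 * matching the per-node vmstat infrastructure added alongside. */
struct pglist_data_new {
	spinlock_t lru_lock;   /* now covers the whole node */
	struct lruvec lruvec;  /* node-wide LRU lists */
};
```

The rest of the series follows from that move: accounting commits ("move most file-based accounting to the node", "add infrastructure for per-node vmstats") relocate counters to the node level, while the vmscan commits rework kswapd to reclaim in terms of nodes rather than iterating zones.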