Commit message | Author | Age | Files | Lines
* mm: cleanup ifdef guards for vmem_altmap | Dan Williams | 2016-07-28 | 2 | -9/+1
* mm: CONFIG_ZONE_DEVICE stop depending on CONFIG_EXPERT | Dan Williams | 2016-07-28 | 1 | -1/+1
* memblock: include <asm/sections.h> instead of <asm-generic/sections.h> | Christoph Hellwig | 2016-07-28 | 1 | -1/+1
* mm, THP: clean up return value of madvise_free_huge_pmd | Huang Ying | 2016-07-28 | 2 | -8/+9
* mm/zsmalloc: use helper to clear page->flags bit | Ganesh Mahendran | 2016-07-28 | 1 | -2/+2
* mm/zsmalloc: add __init,__exit attribute | Ganesh Mahendran | 2016-07-28 | 1 | -1/+1
* mm/zsmalloc: keep comments consistent with code | Ganesh Mahendran | 2016-07-28 | 1 | -4/+3
* mm/zsmalloc: avoid calculate max objects of zspage twice | Ganesh Mahendran | 2016-07-28 | 1 | -16/+10
* mm/zsmalloc: use class->objs_per_zspage to get num of max objects | Ganesh Mahendran | 2016-07-28 | 1 | -11/+7
* mm/zsmalloc: take obj index back from find_alloced_obj | Ganesh Mahendran | 2016-07-28 | 1 | -2/+6
* mm/zsmalloc: use obj_index to keep consistent with others | Ganesh Mahendran | 2016-07-28 | 1 | -7/+7
* mm: bail out in shrink_inactive_list() | Minchan Kim | 2016-07-28 | 1 | -0/+27
* mm, vmscan: account for skipped pages as a partial scan | Mel Gorman | 2016-07-28 | 1 | -2/+18
* mm: consider whether to decivate based on eligible zones inactive ratio | Mel Gorman | 2016-07-28 | 1 | -5/+29
* mm: remove reclaim and compaction retry approximations | Mel Gorman | 2016-07-28 | 8 | -58/+39
* mm, vmscan: remove highmem_file_pages | Mel Gorman | 2016-07-28 | 2 | -25/+4
* mm: add per-zone lru list stat | Minchan Kim | 2016-07-28 | 5 | -9/+23
* mm, vmscan: release/reacquire lru_lock on pgdat change | Mel Gorman | 2016-07-28 | 1 | -11/+10
* mm, vmscan: remove redundant check in shrink_zones() | Mel Gorman | 2016-07-28 | 1 | -3/+0
* mm, vmscan: Update all zone LRU sizes before updating memcg | Mel Gorman | 2016-07-28 | 4 | -15/+37
* mm: show node_pages_scanned per node, not zone | Minchan Kim | 2016-07-28 | 1 | -3/+3
* mm, pagevec: release/reacquire lru_lock on pgdat change | Mel Gorman | 2016-07-28 | 1 | -10/+10
* mm, page_alloc: fix dirtyable highmem calculation | Minchan Kim | 2016-07-28 | 1 | -6/+10
* mm, vmstat: remove zone and node double accounting by approximating retries | Mel Gorman | 2016-07-28 | 9 | -50/+84
* mm, vmstat: print node-based stats in zoneinfo file | Mel Gorman | 2016-07-28 | 1 | -0/+24
* mm: vmstat: account per-zone stalls and pages skipped during reclaim | Mel Gorman | 2016-07-28 | 3 | -4/+18
* mm: vmstat: replace __count_zone_vm_events with a zone id equivalent | Mel Gorman | 2016-07-28 | 2 | -4/+3
* mm: page_alloc: cache the last node whose dirty limit is reached | Mel Gorman | 2016-07-28 | 1 | -2/+11
* mm, page_alloc: remove fair zone allocation policy | Mel Gorman | 2016-07-28 | 4 | -83/+2
* mm, vmscan: add classzone information to tracepoints | Mel Gorman | 2016-07-28 | 2 | -25/+40
* mm, vmscan: Have kswapd reclaim from all zones if reclaiming and buffer_heads... | Mel Gorman | 2016-07-28 | 1 | -8/+14
* mm, vmscan: avoid passing in `remaining' unnecessarily to prepare_kswapd_sleep() | Mel Gorman | 2016-07-28 | 1 | -8/+4
* mm, vmscan: avoid passing in classzone_idx unnecessarily to compaction_ready | Mel Gorman | 2016-07-28 | 1 | -20/+7
* mm, vmscan: avoid passing in classzone_idx unnecessarily to shrink_node | Mel Gorman | 2016-07-28 | 1 | -11/+9
* mm: convert zone_reclaim to node_reclaim | Mel Gorman | 2016-07-28 | 8 | -69/+77
* mm, page_alloc: wake kswapd based on the highest eligible zone | Mel Gorman | 2016-07-28 | 1 | -1/+1
* mm, vmscan: only wakeup kswapd once per node for the requested classzone | Mel Gorman | 2016-07-28 | 2 | -4/+17
* mm: move vmscan writes and file write accounting to the node | Mel Gorman | 2016-07-28 | 5 | -15/+15
* mm: move most file-based accounting to the node | Mel Gorman | 2016-07-28 | 24 | -162/+155
* mm: rename NR_ANON_PAGES to NR_ANON_MAPPED | Mel Gorman | 2016-07-28 | 5 | -8/+8
* mm: move page mapped accounting to the node | Mel Gorman | 2016-07-28 | 8 | -21/+21
* mm, page_alloc: consider dirtyable memory in terms of nodes | Mel Gorman | 2016-07-28 | 4 | -52/+79
* mm, workingset: make working set detection node-aware | Mel Gorman | 2016-07-28 | 4 | -44/+26
* mm, memcg: move memcg limit enforcement from zones to nodes | Mel Gorman | 2016-07-28 | 5 | -144/+111
* mm, vmscan: make shrink_node decisions more node-centric | Mel Gorman | 2016-07-28 | 7 | -44/+54
* mm: vmscan: do not reclaim from kswapd if there is any eligible zone | Mel Gorman | 2016-07-28 | 1 | -32/+27
* mm, vmscan: remove duplicate logic clearing node congestion and dirty state | Mel Gorman | 2016-07-28 | 1 | -12/+12
* mm, vmscan: by default have direct reclaim only shrink once per node | Mel Gorman | 2016-07-28 | 1 | -8/+14
* mm, vmscan: simplify the logic deciding whether kswapd sleeps | Mel Gorman | 2016-07-28 | 4 | -56/+57
* mm, vmscan: remove balance gap | Mel Gorman | 2016-07-28 | 2 | -20/+8