path: root/mm
Commit message  (Author, Date, Files changed, Lines -/+)
...
| * | mm, page_alloc: skip ->watermark_boost for atomic order-0 allocations  (Charan Teja Reddy, 2020-08-07, 1 file, -4/+20)
| * | page_alloc: consider highatomic reserve in watermark fast  (Jaewon Kim, 2020-08-07, 1 file, -30/+36)
| * | mm, page_alloc: use unlikely() in task_capc()  (Vlastimil Babka, 2020-08-07, 1 file, -3/+2)
| * | kasan: remove kasan_unpoison_stack_above_sp_to()  (Vincenzo Frascino, 2020-08-07, 1 file, -15/+0)
| * | kasan: record and print the free track  (Walter Wu, 2020-08-07, 7 files, -45/+80)
| * | rcu: kasan: record and print call_rcu() call stack  (Walter Wu, 2020-08-07, 4 files, -7/+55)
| * | mm/vmalloc.c: remove BUG() from the find_va_links()  (Uladzislau Rezki (Sony), 2020-08-07, 1 file, -9/+32)
| * | mm: vmalloc: remove redundant assignment in unmap_kernel_range_noflush()  (Mike Rapoport, 2020-08-07, 1 file, -1/+0)
| * | mm/vmalloc: update the header about KVA rework  (Uladzislau Rezki (Sony), 2020-08-07, 1 file, -0/+1)
| * | mm/vmalloc: switch to "propagate()" callback  (Uladzislau Rezki (Sony), 2020-08-07, 1 file, -19/+6)
| * | mm/vmalloc: simplify augment_tree_propagate_check()  (Uladzislau Rezki (Sony), 2020-08-07, 1 file, -34/+8)
| * | mm/vmalloc: simplify merge_or_add_vmap_area()  (Uladzislau Rezki (Sony), 2020-08-07, 1 file, -11/+14)
| * | vmalloc: convert to XArray  (Matthew Wilcox (Oracle), 2020-08-07, 1 file, -29/+11)
| * | mm/sparse: cleanup the code surrounding memory_present()  (Mike Rapoport, 2020-08-07, 3 files, -29/+13)
| * | mm/sparse: only sub-section aligned range would be populated  (Wei Yang, 2020-08-07, 1 file, -14/+6)
| * | mm/sparse: never partially remove memmap for early section  (Wei Yang, 2020-08-07, 1 file, -3/+7)
| * | mm/mremap: start addresses are properly aligned  (Wei Yang, 2020-08-07, 2 files, -6/+0)
| * | mm/mremap: calculate extent in one place  (Wei Yang, 2020-08-07, 1 file, -3/+3)
| * | mm/mremap: it is sure to have enough space when extent meets requirement  (Wei Yang, 2020-08-07, 2 files, -11/+6)
| * | mm: remove unnecessary wrapper function do_mmap_pgoff()  (Peter Collingbourne, 2020-08-07, 4 files, -14/+14)
| * | mm: mmap: merge vma after call_mmap() if possible  (Miaohe Lin, 2020-08-07, 1 file, -1/+21)
| * | mm/sparsemem: enable vmem_altmap support in vmemmap_alloc_block_buf()  (Anshuman Khandual, 2020-08-07, 1 file, -15/+13)
| * | mm/sparsemem: enable vmem_altmap support in vmemmap_populate_basepages()  (Anshuman Khandual, 2020-08-07, 1 file, -5/+11)
| * | mm: adjust vm_committed_as_batch according to vm overcommit policy  (Feng Tang, 2020-08-07, 2 files, -6/+57)
| * | mm/util.c: make vm_memory_committed() more accurate  (Feng Tang, 2020-08-07, 1 file, -1/+6)
| * | mm/mmap: optimize a branch judgment in ksys_mmap_pgoff()  (Zhen Lei, 2020-08-07, 1 file, -3/+4)
| * | mm: move p?d_alloc_track to separate header file  (Joerg Roedel, 2020-08-07, 3 files, -0/+54)
| * | mm: move lib/ioremap.c to mm/  (Mike Rapoport, 2020-08-07, 2 files, -1/+288)
| * | mm: remove unneeded includes of <asm/pgalloc.h>  (Mike Rapoport, 2020-08-07, 2 files, -1/+1)
| * | mm/memory.c: make remap_pfn_range() reject unaligned addr  (Alex Zhang, 2020-08-07, 1 file, -1/+4)
| * | mm: remove redundant check non_swap_entry()  (Ralph Campbell, 2020-08-07, 1 file, -1/+1)
| * | mm/page_counter.c: fix protection usage propagation  (Michal Koutný, 2020-08-07, 1 file, -3/+3)
| * | mm: memcontrol: don't count limit-setting reclaim as memory pressure  (Johannes Weiner, 2020-08-07, 2 files, -7/+10)
| * | mm: memcontrol: restore proper dirty throttling when memory.high changes  (Johannes Weiner, 2020-08-07, 1 file, -0/+2)
| * | memcg, oom: check memcg margin for parallel oom  (Yafang Shao, 2020-08-07, 1 file, -1/+7)
| * | mm, memcg: decouple e{low,min} state mutations from protection checks  (Chris Down, 2020-08-07, 2 files, -34/+11)
| * | mm, memcg: avoid stale protection values when cgroup is above protection  (Yafang Shao, 2020-08-07, 2 files, -1/+10)
| * | mm, memcg: unify reclaim retry limits with page allocator  (Chris Down, 2020-08-07, 1 file, -9/+6)
| * | mm, memcg: reclaim more aggressively before high allocator throttling  (Chris Down, 2020-08-07, 1 file, -5/+37)
| * | mm: memcontrol: avoid workload stalls when lowering memory.high  (Roman Gushchin, 2020-08-07, 1 file, -2/+2)
| * | mm: slab: rename (un)charge_slab_page() to (un)account_slab_page()  (Roman Gushchin, 2020-08-07, 3 files, -8/+8)
| * | mm: memcg/slab: remove unused argument by charge_slab_page()  (Roman Gushchin, 2020-08-07, 3 files, -4/+3)
| * | mm: memcontrol: account kernel stack per node  (Shakeel Butt, 2020-08-07, 3 files, -13/+13)
| * | mm: memcg/slab: use a single set of kmem_caches for all allocations  (Roman Gushchin, 2020-08-07, 5 files, -575/+78)
| * | mm: memcg/slab: remove redundant check in memcg_accumulate_slabinfo()  (Roman Gushchin, 2020-08-07, 1 file, -3/+0)
| * | mm: memcg/slab: deprecate slab_root_caches  (Roman Gushchin, 2020-08-07, 4 files, -48/+8)
| * | mm: memcg/slab: remove memcg_kmem_get_cache()  (Roman Gushchin, 2020-08-07, 3 files, -27/+11)
| * | mm: memcg/slab: simplify memcg cache creation  (Roman Gushchin, 2020-08-07, 3 files, -57/+15)
| * | mm: memcg/slab: use a single set of kmem_caches for all accounted allocations  (Roman Gushchin, 2020-08-07, 5 files, -690/+132)
| * | mm: memcg/slab: move memcg_kmem_bypass() to memcontrol.h  (Roman Gushchin, 2020-08-07, 1 file, -12/+0)