path: root/mm
Commit message | Author | Age | Files | Lines
* headers: untangle kmemleak.h from mm.h | Randy Dunlap | 2018-04-05 | 2 | -0/+2
* mm/page_isolation.c: make start_isolate_page_range() fail if already isolated | Mike Kravetz | 2018-04-05 | 2 | -5/+21
* mm, oom: remove 3% bonus for CAP_SYS_ADMIN processes | David Rientjes | 2018-04-05 | 1 | -7/+0
* mm, page_alloc: wakeup kcompactd even if kswapd cannot free more memory | David Rientjes | 2018-04-05 | 2 | -15/+31
* mm: make counting of list_lru_one::nr_items lockless | Kirill Tkhai | 2018-04-05 | 1 | -22/+45
* mm/swap_state.c: make bool enable_vma_readahead and swap_vma_readahead() static | Colin Ian King | 2018-04-05 | 1 | -3/+3
* mm: kernel-doc: add missing parameter descriptions | Mike Rapoport | 2018-04-05 | 8 | -0/+30
* mm/swap.c: remove @cold parameter description for release_pages() | Mike Rapoport | 2018-04-05 | 1 | -1/+0
* mm/nommu: remove description of alloc_vm_area | Mike Rapoport | 2018-04-05 | 1 | -12/+0
* zsmalloc: introduce zs_huge_class_size() | Sergey Senozhatsky | 2018-04-05 | 1 | -0/+41
* mm: fix races between swapoff and flush dcache | Huang Ying | 2018-04-05 | 1 | -0/+10
* mm, hugetlbfs: introduce ->pagesize() to vm_operations_struct | Dan Williams | 2018-04-05 | 1 | -8/+11
* mm, powerpc: use vma_kernel_pagesize() in vma_mmu_pagesize() | Dan Williams | 2018-04-05 | 1 | -5/+3
* mm/gup.c: fix coding style issues. | Mario Leinweber | 2018-04-05 | 1 | -2/+2
* mm/free_pcppages_bulk: prefetch buddy while not holding lock | Aaron Lu | 2018-04-05 | 1 | -0/+22
* mm/free_pcppages_bulk: do not hold lock when picking pages to free | Aaron Lu | 2018-04-05 | 1 | -16/+23
* mm/free_pcppages_bulk: update pcp->count inside | Aaron Lu | 2018-04-05 | 1 | -7/+3
* mm, compaction: drain pcps for zone when kcompactd fails | David Rientjes | 2018-04-05 | 1 | -0/+8
* mm: make should_failslab always available for fault injection | Howard McLauchlan | 2018-04-05 | 2 | -1/+9
* mm/page_poison.c: make early_page_poison_param() __init | Dou Liyang | 2018-04-05 | 1 | -1/+1
* mm/page_owner.c: make early_page_owner_param() __init | Dou Liyang | 2018-04-05 | 1 | -1/+1
* mm/kmemleak.c: make kmemleak_boot_config() __init | Dou Liyang | 2018-04-05 | 1 | -1/+1
* mm: swap: unify cluster-based and vma-based swap readahead | Minchan Kim | 2018-04-05 | 3 | -19/+45
* mm: swap: clean up swap readahead | Minchan Kim | 2018-04-05 | 2 | -63/+59
* mm,vmscan: don't pretend forward progress upon shrinker_rwsem contention | Tetsuo Handa | 2018-04-05 | 1 | -9/+1
* z3fold: limit use of stale list for allocation | Vitaly Wool | 2018-04-05 | 1 | -16/+19
* mm/huge_memory.c: reorder operations in __split_huge_page_tail() | Konstantin Khlebnikov | 2018-04-05 | 1 | -21/+15
* mm: fix races between address_space dereference and free in page_evicatable | Huang Ying | 2018-04-05 | 1 | -1/+7
* mm: reuse DEFINE_SHOW_ATTRIBUTE() macro | Andy Shevchenko | 2018-04-05 | 4 | -49/+5
* mm, page_alloc: move mirrored_kernelcore to __meminitdata | David Rientjes | 2018-04-05 | 1 | -9/+9
* mm, page_alloc: extend kernelcore and movablecore for percent | David Rientjes | 2018-04-05 | 1 | -8/+35
* mm: hwpoison: disable memory error handling on 1GB hugepage | Naoya Horiguchi | 2018-04-05 | 1 | -0/+16
* mm/memory_hotplug: optimize memory hotplug | Pavel Tatashin | 2018-04-05 | 3 | -38/+25
* mm/memory_hotplug: don't read nid from struct page during hotplug | Pavel Tatashin | 2018-04-05 | 1 | -1/+1
* mm: uninitialized struct page poisoning sanity checking | Pavel Tatashin | 2018-04-05 | 1 | -1/+1
* mm/memory_hotplug: enforce block size aligned range check | Pavel Tatashin | 2018-04-05 | 1 | -7/+8
* mm: thp: fix potential clearing to referenced flag in page_idle_clear_pte_ref... | Yang Shi | 2018-04-05 | 1 | -4/+8
* mm: initialize pages on demand during boot | Pavel Tatashin | 2018-04-05 | 2 | -62/+144
* mm: disable interrupts while initializing deferred pages | Pavel Tatashin | 2018-04-05 | 1 | -8/+11
* mm/swap_slots.c: use conditional compilation | Randy Dunlap | 2018-04-05 | 2 | -6/+2
* mm/migrate: rename migration reason MR_CMA to MR_CONTIG_RANGE | Anshuman Khandual | 2018-04-05 | 1 | -1/+1
* mm: always print RLIMIT_DATA warning | David Woodhouse | 2018-04-05 | 1 | -6/+8
* mm/ksm.c: make stable_node_dup() static | Colin Ian King | 2018-04-05 | 1 | -4/+4
* slab, slub: skip unnecessary kasan_cache_shutdown() | Shakeel Butt | 2018-04-05 | 4 | -1/+26
* mm/slab_common.c: remove test if cache name is accessible | Mikulas Patocka | 2018-04-05 | 1 | -19/+0
* slab, slub: remove size disparity on debug kernel | Shakeel Butt | 2018-04-05 | 1 | -5/+4
* slab: use 32-bit arithmetic in freelist_randomize() | Alexey Dobriyan | 2018-04-05 | 1 | -2/+2
* slub: make size_from_object() return unsigned int | Alexey Dobriyan | 2018-04-05 | 1 | -1/+1
* slub: make struct kmem_cache_order_objects::x unsigned int | Alexey Dobriyan | 2018-04-05 | 1 | -35/+39
* slub: make slab_index() return unsigned int | Alexey Dobriyan | 2018-04-05 | 1 | -1/+1