path: root/mm/slub.c
| Commit message | Author | Date | Files | Lines |
|---|---|---|---|---|
| mm/slub.c: include swab.h | Andrew Morton | 2021-06-30 | 1 | -0/+1 |
| Revert "mm, slub: consider rest of partial list if acquire_slab() fails" | Linus Torvalds | 2021-03-17 | 1 | -1/+1 |
| Revert "mm/slub: fix a memory leak in sysfs_slab_add()" | Wang Hai | 2021-01-30 | 1 | -3/+1 |
| mm, slub: consider rest of partial list if acquire_slab() fails | Jann Horn | 2021-01-23 | 1 | -1/+1 |
| mm: slub: fix conversion of freelist_corrupted() | Eugeniu Rosca | 2020-09-09 | 1 | -6/+6 |
| mm/slub: fix stack overruns with SLUB_STATS | Qian Cai | 2020-07-09 | 1 | -1/+2 |
| mm/slub.c: fix corrupted freechain in deactivate_slab() | Dongli Zhang | 2020-07-09 | 1 | -0/+27 |
| mm/slub: fix a memory leak in sysfs_slab_add() | Wang Hai | 2020-06-20 | 1 | -1/+3 |
| mm, slub: restore the original intention of prefetch_freepointer() | Vlastimil Babka | 2020-05-02 | 1 | -2/+1 |
| slub: improve bit diffusion for freelist ptr obfuscation | Kees Cook | 2020-04-24 | 1 | -1/+1 |
| mm, slub: prevent kmalloc_node crashes and memory leaks | Vlastimil Babka | 2020-04-02 | 1 | -9/+17 |
| mm: slub: be more careful about the double cmpxchg of freelist | Linus Torvalds | 2020-04-02 | 1 | -2/+4 |
| mm: slub: add missing TID bump in kmem_cache_alloc_bulk() | Jann Horn | 2020-03-20 | 1 | -0/+9 |
| mm/slub: fix a deadlock in show_slab_objects() | Qian Cai | 2019-10-29 | 1 | -2/+11 |
| slub: make ->cpu_partial unsigned int | Alexey Dobriyan | 2018-10-03 | 1 | -3/+3 |
| mm/slub.c: add __printf verification to slab_err() | Mathieu Malaterre | 2018-08-03 | 1 | -1/+1 |
| slub: fix failure when we delete and create a slab cache | Mikulas Patocka | 2018-07-03 | 1 | -1/+6 |
| kmemcheck: rip it out | Levin, Alexander (Sasha Levin) | 2018-02-22 | 1 | -3/+2 |
| kmemcheck: remove whats left of NOTRACK flags | Levin, Alexander (Sasha Levin) | 2018-02-22 | 1 | -2/+0 |
| kmemcheck: stop using GFP_NOTRACK and SLAB_NOTRACK | Levin, Alexander (Sasha Levin) | 2018-02-22 | 1 | -3/+1 |
| kmemcheck: remove annotations | Levin, Alexander (Sasha Levin) | 2018-02-22 | 1 | -20/+0 |
| slub: fix sysfs duplicate filename creation when slub_debug=O | Miles Chen | 2017-12-14 | 1 | -0/+4 |
| License cleanup: add SPDX GPL-2.0 license identifier to files with no license | Greg Kroah-Hartman | 2017-11-02 | 1 | -0/+1 |
| mm: treewide: remove GFP_TEMPORARY allocation flag | Michal Hocko | 2017-09-13 | 1 | -1/+1 |
| treewide: make "nr_cpu_ids" unsigned | Alexey Dobriyan | 2017-09-08 | 1 | -1/+1 |
| mm/slub.c: constify attribute_group structures | Arvind Yadav | 2017-09-06 | 1 | -1/+1 |
| mm/slub.c: add a naive detection of double free or corruption | Alexander Popov | 2017-09-06 | 1 | -0/+4 |
| mm: add SLUB free list pointer obfuscation | Kees Cook | 2017-09-06 | 1 | -5/+37 |
| slub: tidy up initialization ordering | Alexander Potapenko | 2017-09-06 | 1 | -2/+2 |
| slub: fix per memcg cache leak on css offline | Vladimir Davydov | 2017-08-18 | 1 | -1/+2 |
| mm: memcontrol: account slab stats per lruvec | Johannes Weiner | 2017-07-06 | 1 | -2/+2 |
| mm: vmstat: move slab statistics from zone to node counters | Johannes Weiner | 2017-07-06 | 1 | -2/+2 |
| mm/slub.c: wrap kmem_cache->cpu_partial in config CONFIG_SLUB_CPU_PARTIAL | Wei Yang | 2017-07-06 | 1 | -31/+38 |
| mm/slub.c: wrap cpu_slab->partial in CONFIG_SLUB_CPU_PARTIAL | Wei Yang | 2017-07-06 | 1 | -7/+11 |
| mm/slub: reset cpu_slab's pointer in deactivate_slab() | Wei Yang | 2017-07-06 | 1 | -13/+8 |
| mm/slub.c: remove a redundant assignment in ___slab_alloc() | Wei Yang | 2017-07-06 | 1 | -1/+0 |
| slub: make sysfs file removal asynchronous | Tejun Heo | 2017-06-23 | 1 | -14/+26 |
| slub/memcg: cure the brainless abuse of sysfs attributes | Thomas Gleixner | 2017-06-02 | 1 | -2/+4 |
| mm: Rename SLAB_DESTROY_BY_RCU to SLAB_TYPESAFE_BY_RCU | Paul E. McKenney | 2017-04-18 | 1 | -6/+6 |
| slub: make sysfs directories for memcg sub-caches optional | Tejun Heo | 2017-02-22 | 1 | -2/+24 |
| slab: remove slub sysfs interface files early for empty memcg caches | Tejun Heo | 2017-02-22 | 1 | -2/+23 |
| slab: remove synchronous synchronize_sched() from memcg cache deactivation path | Tejun Heo | 2017-02-22 | 1 | -4/+8 |
| slab: introduce __kmemcg_cache_deactivate() | Tejun Heo | 2017-02-22 | 1 | -17/+22 |
| slab: implement slab_root_caches list | Tejun Heo | 2017-02-22 | 1 | -0/+1 |
| slub: separate out sysfs_slab_release() from sysfs_slab_remove() | Tejun Heo | 2017-02-22 | 1 | -2/+10 |
| Revert "slub: move synchronize_sched out of slab_mutex on shrink" | Tejun Heo | 2017-02-22 | 1 | -2/+17 |
| mm/slub: add a dump_stack() to the unexpected GFP check | Borislav Petkov | 2017-02-22 | 1 | -0/+1 |
| mm/slub.c: fix random_seq offset destruction | Sean Rees | 2017-02-08 | 1 | -0/+4 |
| mm/slub.c: trace free objects at KERN_INFO | Daniel Thompson | 2017-01-24 | 1 | -10/+13 |
| slub: avoid false-postive warning | Arnd Bergmann | 2016-12-12 | 1 | -1/+1 |