path: root/mm/slub.c
Commit message (Author, Date, Files changed, Lines -/+)
* Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi... (Linus Torvalds, 2012-01-11, 1 file, -29/+48)
|\
| * Merge branch 'slab/urgent' into slab/for-linus (Pekka Enberg, 2012-01-11, 1 file, -1/+3)
| |\
| | * slub: add missed accounting (Shaohua Li, 2011-12-13, 1 file, -2/+5)
| | * slub: Switch per cpu partial page support off for debugging (Christoph Lameter, 2011-12-13, 1 file, -1/+3)
| * | slub: disallow changing cpu_partial from userspace for debug caches (David Rientjes, 2012-01-10, 1 file, -0/+2)
| * | slub: Extract get_freelist from __slab_alloc (Christoph Lameter, 2011-12-13, 1 file, -25/+32)
| * | slub: fix a possible memleak in __slab_alloc() (Eric Dumazet, 2011-12-13, 1 file, -0/+5)
| * | slub: add missed accounting (Shaohua Li, 2011-11-27, 1 file, -2/+5)
| * | Merge branch 'slab/urgent' into slab/next (Pekka Enberg, 2011-11-27, 1 file, -16/+26)
| |\|
| * | slub: add taint flag outputting to debug paths (Dave Jones, 2011-11-16, 1 file, -1/+1)
* | | slub: min order when debug_guardpage_minorder > 0 (Stanislaw Gruszka, 2012-01-10, 1 file, -0/+3)
* | | Merge branch 'for-3.3' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/pe... (Linus Torvalds, 2012-01-09, 1 file, -3/+3)
|\ \ \
| * | | percpu: Remove irqsafe_cpu_xxx variants (Christoph Lameter, 2011-12-22, 1 file, -3/+3)
| | |/
| |/|
* / | x86: Fix and improve cmpxchg_double{,_local}() (Jan Beulich, 2012-01-04, 1 file, -2/+2)
|/ /
* | slub: avoid potential NULL dereference or corruption (Eric Dumazet, 2011-11-24, 1 file, -10/+11)
* | slub: use irqsafe_cpu_cmpxchg for put_cpu_partial (Christoph Lameter, 2011-11-24, 1 file, -1/+1)
* | slub: move discard_slab out of node lock (Shaohua Li, 2011-11-15, 1 file, -4/+12)
* | slub: use correct parameter to add a page to partial list tail (Shaohua Li, 2011-11-15, 1 file, -1/+2)
|/
* lib/string.c: introduce memchr_inv() (Akinobu Mita, 2011-10-31, 1 file, -45/+2)
*-. Merge branches 'slab/next' and 'slub/partial' into slab/for-linus (Pekka Enberg, 2011-10-26, 1 file, -166/+392)
|\ \
| | * slub: Discard slab page when node partial > minimum partial number (Alex Shi, 2011-09-27, 1 file, -1/+1)
| | * slub: correct comments error for per cpu partial (Alex Shi, 2011-09-27, 1 file, -1/+1)
| | * slub: Code optimization in get_partial_node() (Alex,Shi, 2011-09-13, 1 file, -4/+2)
| | * slub: per cpu cache for partial pages (Christoph Lameter, 2011-08-19, 1 file, -47/+292)
| | * slub: return object pointer from get_partial() / new_slab(). (Christoph Lameter, 2011-08-19, 1 file, -60/+73)
| | * slub: pass kmem_cache_cpu pointer to get_partial() (Christoph Lameter, 2011-08-19, 1 file, -15/+15)
| | * slub: Prepare inuse field in new_slab() (Christoph Lameter, 2011-08-19, 1 file, -3/+2)
| | * slub: Remove useless statements in __slab_alloc (Christoph Lameter, 2011-08-19, 1 file, -4/+0)
| | * slub: free slabs without holding locks (Christoph Lameter, 2011-08-19, 1 file, -13/+13)
| * | mm: restrict access to slab files under procfs and sysfs (Vasiliy Kulikov, 2011-09-27, 1 file, -3/+4)
| * | Merge branch 'slab/urgent' into slab/next (Pekka Enberg, 2011-09-19, 1 file, -10/+12)
| |\ \
| | * | slub: explicitly document position of inserting slab to partial list (Shaohua Li, 2011-08-27, 1 file, -6/+6)
| |/ /
|/| |
| * | slub: use print_hex_dump (Sebastian Andrzej Siewior, 2011-07-31, 1 file, -35/+9)
* | | slub: add slab with one free object to partial list tail (Shaohua Li, 2011-08-27, 1 file, -1/+1)
| |/
|/|
* | slub: Fix partial count comparison confusion (Christoph Lameter, 2011-08-09, 1 file, -1/+1)
* | slub: fix check_bytes() for slub debugging (Akinobu Mita, 2011-08-09, 1 file, -1/+1)
* | slub: Fix full list corruption if debugging is on (Christoph Lameter, 2011-08-09, 1 file, -2/+4)
|/
* Merge branch 'slub/lockless' of git://git.kernel.org/pub/scm/linux/kernel/git... (Linus Torvalds, 2011-07-30, 1 file, -252/+512)
|\
| * slub: When allocating a new slab also prep the first object (Christoph Lameter, 2011-07-25, 1 file, -0/+3)
| * slub: disable interrupts in cmpxchg_double_slab when falling back to pagelock (Christoph Lameter, 2011-07-18, 1 file, -4/+45)
| * slub: Not necessary to check for empty slab on load_freelist (Christoph Lameter, 2011-07-02, 1 file, -3/+2)
| * slub: fast release on full slab (Christoph Lameter, 2011-07-02, 1 file, -2/+19)
| * slub: Add statistics for the case that the current slab does not match the node (Christoph Lameter, 2011-07-02, 1 file, -0/+3)
| * slub: Get rid of the another_slab label (Christoph Lameter, 2011-07-02, 1 file, -6/+5)
| * slub: Avoid disabling interrupts in free slowpath (Christoph Lameter, 2011-07-02, 1 file, -11/+5)
| * slub: Disable interrupts in free_debug processing (Christoph Lameter, 2011-07-02, 1 file, -4/+10)
| * slub: Invert locking and avoid slab lock (Christoph Lameter, 2011-07-02, 1 file, -77/+52)
| * slub: Rework allocator fastpaths (Christoph Lameter, 2011-07-02, 1 file, -129/+280)
| * slub: Pass kmem_cache struct to lock and freeze slab (Christoph Lameter, 2011-07-02, 1 file, -7/+8)
| * slub: explicit list_lock taking (Christoph Lameter, 2011-07-02, 1 file, -40/+49)
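A graph listing like the one above can be regenerated locally from a kernel tree with `git log --graph`. The exact column layout is produced by the cgit web frontend, but the following invocation is a rough equivalent (the pretty-format string is illustrative, not the one cgit uses internally):

```shell
# Commit graph for mm/slub.c: subject, author, date, plus a diffstat line,
# roughly matching the columns in the listing above.
git log --graph --date=short \
    --pretty=format:'%s (%an, %ad)' \
    --shortstat -- mm/slub.c
```

`--shortstat` prints the "1 file changed, N insertions, M deletions" summary under each commit, which corresponds to the "Files/Lines" columns of the cgit view.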