path: root/include/linux/slub_def.h
Commit message  (Author, Date, Files, Lines -/+)
* Merge branch 'tracing/core-v2' into tracing-for-linus  (Ingo Molnar, 2009-04-02, 1 file, -3/+50)
|\
| * Merge branch 'for-ingo' of git://git.kernel.org/pub/scm/linux/kernel/git/penb...  (Ingo Molnar, 2009-02-20, 1 file, -3/+16)
| |\
| | * SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants  (Christoph Lameter, 2009-02-20, 1 file, -3/+16)
| * | tracing/kmemtrace: normalize the raw tracer event to the unified tracing API  (Frederic Weisbecker, 2008-12-30, 1 file, -1/+1)
| * | kmemtrace: SLUB hooks.  (Eduard - Gabriel Munteanu, 2008-12-29, 1 file, -3/+50)
| |/
| |
| \
*-. \ Merge branches 'topic/slob/cleanups', 'topic/slob/fixes', 'topic/slub/core', ...  (Pekka Enberg, 2009-03-24, 1 file, -4/+17)
|\ \ \
| |_|/
|/| |
| | * SLUB: Do not pass 8k objects through to the page allocator  (Pekka Enberg, 2009-02-20, 1 file, -2/+2)
| | * SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants  (Christoph Lameter, 2009-02-20, 1 file, -3/+16)
| |/
|/|
| * slub: move min_partial to struct kmem_cache  (David Rientjes, 2009-02-23, 1 file, -1/+1)
|/
* SLUB: dynamic per-cache MIN_PARTIAL  (Pekka Enberg, 2008-08-05, 1 file, -0/+1)
* SL*B: drop kmem cache argument from constructor  (Alexey Dobriyan, 2008-07-26, 1 file, -1/+1)
* Christoph has moved  (Christoph Lameter, 2008-07-04, 1 file, -1/+1)
* slub: Do not use 192 byte sized cache if minimum alignment is 128 byte  (Christoph Lameter, 2008-07-03, 1 file, -0/+2)
* slub: Fallback to minimal order during slab page allocation  (Christoph Lameter, 2008-04-27, 1 file, -0/+2)
* slub: Update statistics handling for variable order slabs  (Christoph Lameter, 2008-04-27, 1 file, -0/+2)
* slub: Add kmem_cache_order_objects struct  (Christoph Lameter, 2008-04-27, 1 file, -2/+10)
* slub: No need for per node slab counters if !SLUB_DEBUG  (Christoph Lameter, 2008-04-14, 1 file, -1/+1)
* slub: Fix up comments  (Christoph Lameter, 2008-03-03, 1 file, -2/+2)
* slub: Support 4k kmallocs again to compensate for page allocator slowness  (Christoph Lameter, 2008-02-14, 1 file, -3/+3)
* slub: Determine gfpflags once and not every time a slab is allocated  (Christoph Lameter, 2008-02-14, 1 file, -0/+1)
* slub: kmalloc page allocator pass-through cleanup  (Pekka Enberg, 2008-02-14, 1 file, -2/+6)
* SLUB: Support for performance statistics  (Christoph Lameter, 2008-02-07, 1 file, -0/+23)
* Explain kmem_cache_cpu fields  (Christoph Lameter, 2008-02-04, 1 file, -5/+5)
* SLUB: rename defrag to remote_node_defrag_ratio  (Christoph Lameter, 2008-02-04, 1 file, -1/+4)
* Unify /proc/slabinfo configuration  (Linus Torvalds, 2008-01-02, 1 file, -2/+0)
* slub: provide /proc/slabinfo  (Pekka J Enberg, 2008-01-01, 1 file, -0/+2)
* Slab API: remove useless ctor parameter and reorder parameters  (Christoph Lameter, 2007-10-17, 1 file, -1/+1)
* SLUB: Optimize cacheline use for zeroing  (Christoph Lameter, 2007-10-16, 1 file, -0/+1)
* SLUB: Place kmem_cache_cpu structures in a NUMA aware way  (Christoph Lameter, 2007-10-16, 1 file, -3/+6)
* SLUB: Move page->offset to kmem_cache_cpu->offset  (Christoph Lameter, 2007-10-16, 1 file, -0/+1)
* SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab  (Christoph Lameter, 2007-10-16, 1 file, -1/+8)
* SLUB: direct pass through of page size or higher kmalloc requests  (Christoph Lameter, 2007-10-16, 1 file, -33/+24)
* SLUB: Force inlining for functions in slub_def.h  (Christoph Lameter, 2007-08-31, 1 file, -4/+4)
* fix gfp_t annotations for slub  (Al Viro, 2007-07-20, 1 file, -1/+1)
* Slab allocators: Cleanup zeroing allocations  (Christoph Lameter, 2007-07-17, 1 file, -13/+0)
* SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG  (Christoph Lameter, 2007-07-17, 1 file, -0/+4)
* Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics  (Christoph Lameter, 2007-07-17, 1 file, -12/+0)
* slob: initial NUMA support  (Paul Mundt, 2007-07-16, 1 file, -1/+5)
* SLUB: minimum alignment fixes  (Christoph Lameter, 2007-06-16, 1 file, -2/+11)
* SLUB: return ZERO_SIZE_PTR for kmalloc(0)  (Christoph Lameter, 2007-06-08, 1 file, -8/+17)
* Slab allocators: define common size limitations  (Christoph Lameter, 2007-05-17, 1 file, -17/+2)
* slub: fix handling of oversized slabs  (Andrew Morton, 2007-05-17, 1 file, -1/+6)
* Slab allocators: Drop support for destructors  (Christoph Lameter, 2007-05-17, 1 file, -1/+0)
* SLUB: It is legit to allocate a slab of the maximum permitted size  (Christoph Lameter, 2007-05-16, 1 file, -1/+1)
* SLUB: CONFIG_LARGE_ALLOCS must consider MAX_ORDER limit  (Christoph Lameter, 2007-05-15, 1 file, -1/+5)
* slub: enable tracking of full slabs  (Christoph Lameter, 2007-05-07, 1 file, -0/+1)
* SLUB: allocate smallest object size if the user asks for 0 bytes  (Christoph Lameter, 2007-05-07, 1 file, -2/+6)
* SLUB core  (Christoph Lameter, 2007-05-07, 1 file, -0/+201)