commit:    7d557b3cb69398d83ceabad9cf147c93a3aa97fd
author:    Glauber Costa <glommer@parallels.com>  2013-02-22 20:20:00 +0400
committer: Pekka Enberg <penberg@kernel.org>      2013-02-28 09:29:38 +0200
tree:      f1582b8912fe77bac85dc0e1cbac44acab1a7e7a
parent:    b1e0541674904db00e17ce646b0a1d54f59136a5
slub: correctly bootstrap boot caches
After we create a boot cache, we may allocate from it until it is fully
bootstrapped. Such an allocation moves a page from the partial list to the
cpu slab list. If this happens, the loop:

    list_for_each_entry(p, &n->partial, lru)

that we use to scan all partial pages will yield nothing, and those pages
will keep pointing to the boot cpu cache, which is, of course, invalid. To
fix this, we should flush the cache so that the cpu slab is returned to the
partial list.
Signed-off-by: Glauber Costa <glommer@parallels.com>
Reported-by: Steffen Michalke <StMichalke@web.de>
Tested-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-rw-r--r--  mm/slub.c | 6 ++++++
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 6184b0821f7e..aa0728daf8bb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3552,6 +3552,12 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)

 	memcpy(s, static_cache, kmem_cache->object_size);

+	/*
+	 * This runs very early, and only the boot processor is supposed to be
+	 * up. Even if it weren't true, IRQs are not up so we couldn't fire
+	 * IPIs around.
+	 */
+	__flush_cpu_slab(s, smp_processor_id());
 	for_each_node_state(node, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n = get_node(s, node);
 		struct page *p;