author | David Rientjes <rientjes@google.com> | 2008-12-17 22:09:46 -0800
committer | Pekka Enberg <penberg@cs.helsinki.fi> | 2008-12-29 11:40:58 +0200
commit | 7b8f3b66d9d7e5f021ae535620b9b52833f4876e (patch)
tree | 0495da27c549f9abd8bcb75c158edf20e35ca711 /mm
parent | dfcd3610289132a762b7dc0eaf33998262cd9e20 (diff)
slub: avoid leaking caches or refcounts on sysfs error
If a slab cache is mergeable and the sysfs alias cannot be added, the
target cache must have its refcount decremented. kmem_cache_create()
will return NULL, so if kmem_cache_destroy() is ever called on the target
cache, the leaked refcount will prevent it from ever being freed.
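For context, a rough sketch of how kmem_cache_destroy() handles the refcount in SLUB at this point (paraphrased, not the exact source): the cache is only unlinked and freed once the last reference is dropped, so a reference leaked by kmem_cache_create() can never be dropped and the cache can never be destroyed.

```c
/*
 * Paraphrased sketch of SLUB's kmem_cache_destroy() refcounting of
 * this era (not the verbatim kernel source): teardown only happens
 * when the last reference goes away.  A refcount bumped for a merged
 * alias but never returned to a caller can never be dropped again.
 */
void kmem_cache_destroy(struct kmem_cache *s)
{
	down_write(&slub_lock);
	s->refcount--;
	if (!s->refcount) {
		/* last user: unlink and actually free the cache */
		list_del(&s->list);
		up_write(&slub_lock);
		/* ... close the cache and remove its sysfs entry ... */
	} else
		up_write(&slub_lock);
}
```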
Likewise, if a slab cache is not mergeable and the sysfs link cannot be
added, the new cache must be removed from the slab_caches list and freed.
kmem_cache_create() will return NULL, so it will be impossible to call
kmem_cache_destroy() on it.
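Condensed from the diff below, the error path added for this case re-takes slub_lock, unlinks the new cache from slab_caches, and frees it before bailing out:

```c
	if (sysfs_slab_add(s)) {
		/*
		 * sysfs registration failed: kmem_cache_create() will
		 * return NULL, so no caller can ever destroy this cache.
		 * Undo the list insertion and free it here instead.
		 */
		down_write(&slub_lock);
		list_del(&s->list);
		up_write(&slub_lock);
		kfree(s);
		goto err;
	}
```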
Both of these operations require slub_lock, since the refcounts of all slab
caches and the slab_caches list are protected by it.
In the mergeable case, it would be better to restore objsize and offset
to their original values, but this could race with another merge since
slub_lock was dropped.
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/slub.c | 13
1 file changed, 11 insertions, 2 deletions
diff --git a/mm/slub.c b/mm/slub.c
index 704cfa34f9ab..d057ceb3645f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3124,8 +3124,12 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
 		s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
 		up_write(&slub_lock);
 
-		if (sysfs_slab_alias(s, name))
+		if (sysfs_slab_alias(s, name)) {
+			down_write(&slub_lock);
+			s->refcount--;
+			up_write(&slub_lock);
 			goto err;
+		}
 		return s;
 	}
 
@@ -3135,8 +3139,13 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
 				size, align, flags, ctor)) {
 			list_add(&s->list, &slab_caches);
 			up_write(&slub_lock);
-			if (sysfs_slab_add(s))
+			if (sysfs_slab_add(s)) {
+				down_write(&slub_lock);
+				list_del(&s->list);
+				up_write(&slub_lock);
+				kfree(s);
 				goto err;
+			}
 			return s;
 		}
 		kfree(s);